Last Update 1:12 PM October 02, 2024 (UTC)

Company Feeds | Identosphere Blogcatcher

Brought to you by Identity Woman and Infominer.
Support this collaboration on Patreon!

Wednesday, 02. October 2024

Ocean Protocol

Season 6 of the Ocean Zealy Community Campaign!


We’re happy to announce Season 6 of the Ocean Zealy Community Campaign, an initiative that has brought together our vibrant community and rewarded the most active and engaged members.

💰 Reward Pool

5,000 $FET tokens will be rewarded to the top 100 users on our leaderboard 🚀

📜 Program Structure

Season 6 of the Ocean Zealy Community Campaign will feature more engaging tasks and activities, providing participants with opportunities to earn points. From onboarding tasks to Twitter engagement and content creation, there’s something for everyone to get involved in and earn points and rewards along the way.

⏰ Campaign Duration: until the 31st of October, 12:00 PM UTC

🤔 How Can You Participate?

Follow this link to join and earn:

https://zealy.io/cw/onceaprotocol/questboard

Season 6 of the Ocean Zealy Community Campaign! was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Elliptic

Crypto regulatory affairs: US Treasury targets Russia-linked crypto exchanges in cybercrime crackdown

On September 26, the United States Treasury took action to shut two Russia-linked cryptoasset exchanges out of the financial system owing to their role in facilitating money laundering for cybercriminals and fraudsters. 



Ontology

Ready to Hack the Future of Digital Identity? Join Ontology’s Challenge at the DIF Hackathon 2024


If you’re a developer who wants to help shape the future of decentralized identity — and maybe take home part of a $70,000 prize pool — the Decentralized Identity Foundation (DIF) Hackathon 2024 is where you need to be. Running from October 1st to November 4th, this event brings together creators, innovators, and coders from around the world to redefine digital identity through cutting-edge solutions.

Ontology is stepping up with a challenge that puts the spotlight on ONT Login, our decentralized authentication tool. We’re inviting developers to take it for a spin, build something amazing, and demonstrate just how easy it is to integrate decentralized identity into real-world applications. Whether you’re building for Web2 or Web3, ONT Login is here to make decentralized authentication simple and secure.

The Hackathon: A Platform for Innovation

The DIF Hackathon is no ordinary coding event. With tracks covering Education, Reusable Identity, Travel, and Zero Knowledge Proofs (ZKPs), this hackathon offers endless opportunities for both seasoned developers and newcomers to showcase their skills. Plus, with $70,000 up for grabs, this is the perfect chance to innovate, collaborate, and push the boundaries of what’s possible with decentralized identity.

Ontology’s Challenge: Show Us What You Can Build with ONT Login

At the core of Ontology’s hackathon challenge is ONT Login, a decentralized universal authentication solution that empowers developers to integrate secure, privacy-first login functionality into their apps. With ONT Login, users can log in seamlessly without sacrificing control over their data — a crucial step forward in a world where privacy is increasingly under attack.
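Under the hood, decentralized authentication of this kind typically follows a challenge-response pattern: the server issues a one-time challenge, the user’s wallet signs it with the key behind their DID, and the server verifies the signature against the key published in the DID document. The sketch below is a generic, hypothetical illustration of that pattern only; it does not use the ONT Login API (the official SDKs linked later handle the real flow, including DID resolution), and a locally generated Ed25519 key pair stands in for the key a DID document would reference.

```typescript
// Conceptual sketch of DID-style challenge-response login. Not the ONT Login
// API: a local Ed25519 key pair stands in for the user's DID key.
import { randomBytes, generateKeyPairSync, sign, verify } from 'node:crypto';

// Server side: issue a one-time challenge bound to this login attempt.
function issueChallenge(): { nonce: string; issuedAt: number } {
  return { nonce: randomBytes(32).toString('hex'), issuedAt: Date.now() };
}

// Wallet side: the user's private key signs the challenge nonce.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');
const challenge = issueChallenge();
const signature = sign(null, Buffer.from(challenge.nonce), privateKey);

// Server side: in a real flow, resolve the user's DID document and take its
// verification key; here we verify against the local public key directly.
const accepted = verify(null, Buffer.from(challenge.nonce), publicKey, signature);
console.log(accepted ? 'login accepted' : 'login rejected');
```

The one-time nonce is what prevents replay: a captured signature is useless for any later login attempt.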

We’re challenging you to:

Create a repository for ONT Login’s technical documentation or SDKs to help make the tool even more accessible to the broader developer community.

Show us a demo of ONT Login integrated into an existing app, adding it as one of your user login methods. Whether it’s a Web2 or Web3 app, we want to see ONT Login in action, providing a glimpse into a future where decentralized authentication is the norm, not the exception.

Why ONT Login Matters

In the age of constant data breaches, the need for reusable, self-sovereign identity has never been greater. ONT Login is fully open-source and supports multi-SDKs, making it flexible enough to fit into any project while protecting user privacy. It’s not just a technical solution — it’s a statement. A statement that users deserve control over their personal data, and that decentralized identity can deliver on the promise of a more secure digital future.

Need some resources to get started? You’ve got everything you need right here:

Official Website
Documentation
Back-end SDK
Front-end SDK

Need more help? Join Ontology’s Hackathon Challenge presentation for a detailed walkthrough of ONT Login’s integration:

Join Our Session

For specific support during the hackathon, hop into our dedicated Ontology Support Discord channel:

Join Ontology’s Discord

How to Join the Hackathon

Ready to get involved? Here’s how:

1. Register for the DIF Hackathon on DevPost.
2. Check out the DIF Hackathon details and sign up for educational sessions to help sharpen your skills.
3. Join the DIF Discord community to connect with other developers, share ideas, and get feedback.

This is your chance to collaborate, learn, and make a real impact on the future of digital identity. And if you’re lucky, you might walk away with a piece of the prize pool too.

Why You Should Care

Let’s be real — our digital identities are under constant threat. Every week, there’s another data breach, another scandal involving companies selling our personal information to the highest bidder. ONT Login is Ontology’s answer to that mess. It gives users control over their own data, ensuring privacy without compromising convenience. And now, we’re handing it over to you, the developers, to show the world what decentralized authentication can really do.

This isn’t just about winning a hackathon. It’s about proving that we don’t have to settle for the status quo in digital identity. It’s about showing that privacy and security can coexist with ease of use, and that decentralized identity is the way forward.

So, what are you waiting for? Join the DIF Hackathon, take the ONT Login Challenge, and be part of the next wave of innovation in decentralized identity.

Happy hacking, and may the best builder win.

Ready to Hack the Future of Digital Identity? Join Ontology’s Challenge at the DIF Hackathon 2024 was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


IDnow

5 reasons why digital identities will revolutionize business in 2025 and beyond.

Companies are transforming the way their customers access and reuse their services. Is it time you explored reusable identities? 

Managing customer accounts and customers’ online identities has become a major challenge for businesses and customers alike.  

While there have been significant strides over the last decade regarding remote identity verification, with a variety of automated and expert-led identity verification solutions now available, most options still require repeat verification. For example, each time a customer wishes to open an account or register for a highly regulated service, they need to complete an identification process, which is far from ideal from a business or customer experience standpoint. 

However, with digital identity solutions like IDnow’s YRIS, customers can verify and reuse their identity instantly and effortlessly across a multitude of industries and regulated services. 

Plus, with YRIS’s recent Substantial Level of Assurance (LoA) certification, it is now recognized as providing a level of identity assurance equivalent to face-to-face verification.

Government green light. Which countries have digital identities? 

There are currently 2.5 billion people around the world with digital identities, but that figure is expected to reach 4 billion by 2026. India lays claim to the world’s largest biometric ID system. Aadhaar is used by 99% of Indian adults (1.3 billion people) for tasks like opening bank accounts and obtaining SIM cards.  

In Europe, one of the first nations to fully embrace digital identity verification was Estonia. In fact, its e-ID and digital signature service launched over 20 years ago, in 2002. Use cases include e-prescriptions and i-voting, which approximately 33% of Estonians use to cast votes from wherever they are. 

The success of a nation’s digital identity strategy comes largely down to how strongly the private and public sector pushes for its adoption. For example, in France, digital identity is driven by a combination of ambitious public initiatives and innovative private solutions like YRIS.

Identity crisis? The future of digital identity in the UK. Download to discover what the Digital Identity and Attributes Trust Framework means for the future of identity verification in the UK. Get your free copy

The UK’s digital identity approach has been more hesitant and nuanced. In 2022, it released its Digital Identity and Attributes Trust Framework – a set of rules and standards designed to establish trust in digital identity products. 

There are hundreds of use cases across the public and private sectors where digital identities could be used to optimize the user experience. For example, account opening in Banking, compliance checks in Crypto, age verification in Mobility, streamlined check-in processes in Travel, financial risk checks in Gambling, and contract signing in Telecommunication. The UK government has decided to start off with just a few use cases: Right to Work, Right to Rent, and DBS Checks. 

As these solutions improve and their adoption grows, it is highly likely that digital identity will soon become the standard for accessing a multitude of online services, both in the public and the private sector. 

The world of economic, societal and political possibilities opened by digital identities is vast and varied. According to research from the McKinsey Global Institute, countries that implement digital identities could unlock value equivalent to between 3 and 13% of GDP by 2030. Broad adoption of an interoperable digital ID system will increase inclusion, and provide greater access to finance, health and other essential services. 

Check out below for the top five benefits of digital identity usage for businesses and users. 

1. Enhanced security and protection against identity theft. 

One of the main advantages of digital identity lies in the enhanced security it provides. With advanced methods like multi-factor authentication and the use of biometrics, the risks of fraud and identity theft are significantly reduced. These technologies ensure that the user is who they claim to be, thus limiting the chances of identity theft. 

In addition, digital identity helps to better protect users’ sensitive personal data. By centralizing the management of this data and relying on strict security standards, companies can ensure better protection of personal information, which is crucial in a world increasingly exposed to cyber threats. 

2. Improved operational efficiency.   

Along with strengthening security, digital identity also improves operational efficiency within organizations by automating processes and reducing processing times, such as during new client onboarding. 

By using centralized platforms for identity management, businesses can reduce costs associated with manual management. These gains in efficiency improve productivity and tracking and reduce human errors. 

3. Optimized user experience.   

One of the most appealing aspects of digital identity is the improved user experience. With solutions such as Single Sign-On, users can access various services without having to juggle multiple usernames and passwords. This simplifies their access to platforms while minimizing friction during online transactions. 

Digital identity also allows for greater personalization of services and interactivity. By better understanding the user, businesses can offer tailored services that meet their specific needs. This enhances customer loyalty while providing a seamless and intuitive experience, resulting in overall higher conversion rates. 

4. Adhere to regulatory compliance. 

Certifications such as the LoA guarantee a level of assurance equivalent to face-to-face identity verification, while compliance with the eIDAS regulation (electronic IDentification, Authentication and trust Services) and certification by ANSSI (the French cybersecurity agency) ensure digital identity solutions follow the highest European standards in terms of security and personal data protection. 

Businesses that comply with rules, regulations and laws reassure their clients that they’re taking the confidentiality of their personal information seriously. Compliance with European regulations is a strong testament to the rapid adoption of digital identity in sectors such as finance, healthcare and public administration. 

5. Undeniable competitive advantage.  

Digital identity offers a significant competitive advantage. A solution like YRIS provides access to new technologies and innovative solutions, while offering the flexibility needed to quickly adapt to market changes. 

By offering a smooth and secure user experience, businesses can more easily retain customers, who are more likely to return to services where they feel confident, where their identity is protected and where the experience is simple and intuitive. 

YRIS: Transforming the digital future of France.  

Digital identity, powered by solutions like YRIS, is profoundly transforming the management of online identity in France and beyond. With enhanced security, compliance with European regulations and advanced fraud protection, YRIS stands out as a simple and trusted solution. Available 24/7, it optimizes operational efficiency through automated processes, unlimited reuse of identity and a simplified user experience. 

By integrating easily with existing systems and serving an inclusive customer base, YRIS offers a comprehensive response to modern digital identity management challenges. 

Discover more about the benefits of digital identities in our blog, ‘Why the UK is banking on digital identity in 2023’. 

By Mallaury Marie
Content Manager at IDnow
Connect with Mallaury on LinkedIn


Finema

This Month in Digital Identity — October Edition


Welcome to the October edition of our monthly digital identity series! This month, we’re exploring the critical developments and innovative strategies that are redefining the landscape of digital identity. We’ll delve into significant advancements in decentralized identity, the balance between regulation and privacy, the role of biometric technology in hiring compliance, and the establishment of security standards for digital ID wallets in the EU.

Here’s a closer look at the essential topics we’ll be covering:

Advancing Decentralized Identity with the SLAP Framework

Velocity Network has dedicated the past five years to developing the Internet of Careers, focusing on essential business needs through the SLAP framework. This innovative approach emphasizes four critical components:

Survivable Credentials: These credentials are designed to remain valid and accessible over time, ensuring that users can reliably present their identities without facing barriers.

Legal Risk Mitigation: By addressing potential legal challenges associated with identity verification, organizations can significantly reduce their exposure to regulatory pitfalls, fostering a more secure environment for both users and businesses.

Accreditation for Issuers and Relying Parties: Establishing trusted standards for all participants in the identity ecosystem helps to enhance credibility and build trust among users.

Practical Privacy: Prioritizing user privacy ensures that individuals maintain control over their personal information, which is essential in today’s digital landscape.

Velocity Network’s collaborative efforts invite stakeholders from various sectors to contribute to effective decentralized identity solutions. By working together, we can empower individuals with greater control over their identities and foster a more inclusive digital ecosystem.

Navigating the Tension Between Decentralized Identity and Regulation

In the ever-evolving digital landscape, the interplay between decentralized identity and regulatory frameworks has become increasingly critical. High-profile cases such as Silk Road and Tornado Cash highlight the challenges of balancing innovation with compliance.

To address these challenges, it is essential to adopt a balanced approach that fosters the development of decentralized reputation systems. Such systems can empower self-regulation while ensuring both privacy and accountability. By leveraging anonymous identities, we can create a framework where individuals have control over their digital presence while participating responsibly in digital platforms.

This approach not only enhances user empowerment but also helps build trust within communities. Learning from past experiences with regulatory challenges can inform the design of more resilient and adaptable decentralized identity systems. By understanding the nuances of this complex relationship, we can pave the way for innovative solutions that respect both freedom and the need for regulation.

Enhancing Hiring Compliance in the UK with Yoti Biometrics

Yoti is making significant strides in the UK hiring landscape by integrating biometric technology with Sterling’s background checks. This partnership aims to streamline compliance and enhance the security and accuracy of identity verification during the hiring process.

By utilizing Yoti’s biometric solutions, employers can simplify the compliance process, ensuring that they meet regulatory requirements efficiently. This integration not only reduces the risk of non-compliance but also enhances security, making it more difficult for fraudulent activities to occur.

Candidates benefit from this system as well, enjoying a smoother onboarding experience. The biometric verification process is designed to be quick and user-friendly, allowing job seekers to complete identity checks seamlessly. This innovative approach not only improves the overall efficiency of the hiring process but also instills greater confidence among employers and candidates alike.

As organizations increasingly recognize the value of biometric technology, Yoti’s integration with Sterling’s background checks stands as a promising development for the future of hiring compliance in the UK.

ENISA to Launch Cybersecurity Certification Scheme for EU Digital ID Wallets

In a significant move to bolster security in the digital identity landscape, the European Union Agency for Cybersecurity (ENISA) is set to establish a cybersecurity certification scheme for the EU’s digital ID wallets. This initiative aims to ensure that digital identity solutions meet high standards of security and trustworthiness, thereby promoting consumer confidence in these technologies.

The certification scheme will provide a robust framework for assessing and validating the security measures implemented in digital ID wallets. By aligning with EU regulations and standards, this initiative supports the broader strategy of creating a secure and interoperable digital identity ecosystem within the EU.

ENISA emphasizes the importance of collaboration with various stakeholders, including industry leaders and governmental bodies, to develop a comprehensive certification process. This collaborative approach is crucial for addressing the diverse needs and challenges in the digital identity landscape.

By fostering trust in digital identity solutions, this initiative paves the way for increased adoption and reliance on secure digital services across the EU.

We look forward to bringing you more insightful updates as we continue to explore the latest trends and innovations in the field of digital identity. Together, we can contribute to a more secure and inclusive digital future.

This Month in Digital Identity — October Edition was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.


Thales Group

Empowering drones to enhance naval operations: Thales successfully demonstrates latest innovations during Portuguese Naval military exercise

For the third year in a row, Thales, worldwide leader in underwater and surface warfare systems, took part in the annual REPMUS¹ military exercise hosted in Portugal by the Portuguese Navy, where NATO members were invited to participate. During this military at-sea exercise, thanks to the sponsorship of the Portuguese, British and French Navies, Thales leveraged its expertise in unmanned systems and digital technologies to unleash the potential offered by both air and surface drones. Over the years, Thales has demonstrated its capacity to deliver cutting-edge solutions to navies across the world, ensuring a high level of responsiveness and flexibility in anti-submarine warfare, anti-mine warfare, and anti-air warfare detection.

At the REPMUS 2024 exercise, Thales showcased its highly innovative solutions in the field of unmanned systems for naval warfare during real-world conditions. Through its multi-drone management solutions, smart sensors and combat management systems specifically designed for unmanned platforms, Thales is supporting navies to develop new concepts of operations, teaming manned and unmanned systems to carry out naval missions with greater responsiveness and flexibility.

Unleashing the full potential of unmanned systems

From 9-27 September 2024, Thales demonstrated its latest innovations to support concept development and experimentation with new technologies for sponsors in the Portuguese, British and French navies. These span unmanned systems designed for all types of missions, both underwater (anti-submarine barriers, critical infrastructure protection and anti-mine warfare) and above water (maritime surveillance and anti-surface warfare). Equipped with high-performance smart sensors and systems, various air and surface drones from Thales and its partners Schiebel and Tekever carried out effective surveillance, tracking and protection missions. An advanced command and control system enabled these drones not only to achieve their mission efficiently but also to operate collaboratively and autonomously, under the control of an operator.

“In recent years, drones have proved to be a game-changer for armed forces. To support them, we bring our unique expertise in drone systems, sensors, communications, digital technologies and Artificial Intelligence, for air, as well as above and underwater. Our teams mobilise all their talent to develop innovative systems that exploit the full potential offered by drones for the benefit of navies, whose missions are evolving rapidly and are increasingly carried out in coalition. This was brilliantly demonstrated during the REPMUS exercises organized by the Portuguese Navy, with the invaluable support of our other partner navies and invited NATO countries.” Philippe Duhamel, EVP Defence Mission Systems, Thales.

REPMUS demonstrations enabled Thales and partners to:

Highlight the value of modular, multi-mission unmanned systems that comprise unmanned vehicles operating with a range of payloads, integrated with different Command and Control systems to achieve naval mission objectives.

Test and prove the effectiveness of detection, identification and monitoring of threats by collecting and processing, in real time, data from sensors on-board the various unmanned systems with the help of Artificial Intelligence (AI), while guaranteeing overall coherence with the Combat Management System.

Successfully federate several air and surface drones, automatically creating mission plans, tasking and ensuring complete supervision.

Increase projection capability through communication systems integrated into unmanned aerial systems to relay operational data, enabling greater responsiveness and a reduction in human resources, costs and environmental footprint, and to demonstrate supervision systems deployed on surface ships.

Experiment with the Royal Navy on additional surveillance mission capabilities for the RWUAS (Multi-Mission Rotary Wing Unmanned Air System), enhancing the system with new capabilities in smart sensors and combat management systems.

Confirm compatibility with NATO interoperability standards currently under development.

Empowering drones through Artificial Intelligence (AI)

The use of AI-boosted sensors and systems improves the ability to detect, identify and track threats in real time using data collected by sensors. For instance, AI algorithms were used to automatically detect abnormal behaviour in the positioning and trajectory of ships, and to analyse images captured by sensors on the Unmanned Surface Vessel (USV) to identify any unusual activities or patterns that may indicate a security threat. This empowerment of drones enables a higher volume of data to be processed as well as continuous situation monitoring, thus saving the armed forces valuable time.

While aerial and maritime drones are undoubtedly valuable assets, they require multiple operators and extensive coordination for effective control. By leveraging AI, Thales increases autonomy among its unmanned systems, enabling them to automatically raise an alert, formulate mission plans and even select the most suitable drone among a fleet as soon as a potential threat is detected. This enables operators to supervise missions in real time. An interface integrates the entire drone fleet and operators can take control of drone systems at any time, keeping humans at the centre of the decision-making process.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.

Artificial Intelligence at Thales

Thales is a key player in trusted, cyber-secure, transparent, explainable, and ethical AI, serving the armed forces, aircraft manufacturers, and critical infrastructure operators. The Group employs over 600 engineers specializing in AI, and around 100 PhD students are conducting their research in this field.

Organized within Thales's AI accelerator, including AI Lab for research, AI Factory for systems (including decision support systems), and AI Sensors (for sonar, radar, radios, and optronics), these experts contribute to integrating AI into more than a hundred of Thales's products and services. Leveraging state-of-the-art sensor and system technologies, Thales's AI capabilities meet the full range of needs in defence, space, aerospace, cybersecurity, and digital identity sectors. Trusted AI addresses the security and sovereignty requirements of Thales's clients, enhancing data analysis and human decision-making, accelerating the detection, identification, and classification of objects or scenes of interest, while considering specific constraints such as cybersecurity, deployability, and efficiency in critical environments. In 2023, Thales ranked first in Europe for AI patent filings related to critical systems.

1 The 2024 REPMUS edition is co-organized by the Portuguese Navy, Faculty of Engineering of the University of Porto (FEUP), NATO Science and Technology Organization Centre for Maritime Research and Experimentation (NATO STO CMRE), NATO Maritime Unmanned Systems Initiative (NATO MUSI) and European Defence Agency (EDA).

Contacts: Camille Heck, Thales, Media Relations Land & Naval Defence; Alice Pruvot, Head of Media Relations, Aeronautics & Defense.

TBD

Known Customer Credential Hackathon

Participate in this hackathon to issue a Known Customer Credential and streamline KYC across payment apps.

tbDEX is an open messaging protocol that enables liquidity seekers to connect with liquidity providers. This means that as a liquidity provider, your business can be the backend supplier in several payment applications.

Performing KYC on repeat customers every time they attempt to transact with you from a different payment app would be a pain. To avoid this, you will use the Web5 SDK to issue a Known Customer Credential (KCC) to a customer, Alice, who you have already completed KYC on. You will store the JWT representing the KCC in Alice’s Decentralized Web Node so that she can present it to your business from any payment app.

Challenge

1. Create a Decentralized Identifier (DID) and DWN to use as the Issuer. Bonus: Use the DIF community DWN instance hosted by Google Cloud.
2. Issue Alice a KCC that includes evidence. Note that for this challenge, you do not need to implement an actual identity verification flow.
3. Install the VC Protocol onto your DWN so that you can communicate with Alice’s DWN.
4. Obtain permission to write to Alice’s DWN by sending a GET request to: https://vc-to-dwn.tbddev.org/authorize?issuerDid=${issuerDidUri}
5. Store the VC JWT of the KCC as a private record in Alice’s DWN.

Submit

To enter a submission for this hackathon, provide the DWN Record ID of the KCC.
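
For orientation, here is a minimal sketch of the DID-creation, issuance, and signing steps using the Web5 packages listed under Resources below. It is a sketch under assumptions rather than a reference implementation: the `data` and `evidence` contents are placeholders (the real shape is defined by the Known Customer Credential Schema), and the DWN write is only indicated in a closing comment because it depends on the protocol installation and authorization steps above.

```typescript
// Minimal sketch: create an issuer DID and issue Alice a signed KCC.
// Assumes Node 18+ with ESM (top-level await) and the @web5 packages installed.
import { DidDht } from '@web5/dids';
import { VerifiableCredential } from '@web5/credentials';

// 1. Create the issuer's DID. DidDht.create() returns a Bearer DID, which
//    carries the signing keys needed later.
const issuer = await DidDht.create();

// Alice's DID, as published in the Resources below.
const aliceDid =
  'did:dht:rr1w5z9hdjtt76e6zmqmyyxc5cfnwjype6prz45m6z1qsbm8yjao';

// 2. Issue the KCC. The `data` and `evidence` values are placeholders; the
//    real field names come from the Known Customer Credential Schema.
const kcc = await VerifiableCredential.create({
  type: 'KnownCustomerCredential',
  issuer: issuer.uri,
  subject: aliceDid,
  expirationDate: '2026-10-02T00:00:00Z',
  data: { countryOfResidence: 'US' }, // placeholder
  evidence: [
    // placeholder: the challenge does not require a real IDV flow
    { kind: 'document_verification', checks: ['passport'] },
  ],
});

// 3. Sign to produce the VC JWT that will be stored in Alice's DWN.
const kccJwt = await kcc.sign({ did: issuer });
console.log(kccJwt);

// 4. (Not shown) Install the VC Protocol, authorize via the /authorize
//    endpoint above, then write kccJwt to Alice's DWN as a private record
//    with web5.dwn.records.create(...); see the linked guides for the
//    exact message fields.
```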

Resources

Alice’s DID: did:dht:rr1w5z9hdjtt76e6zmqmyyxc5cfnwjype6prz45m6z1qsbm8yjao
web5/credentials SDK
web5/api SDK
How to create a DID and DWN with Web5.connect()
Obtain Bearer DID - required to sign KCC
Known Customer Credential Schema
How to issue a VC with Web5
Example of issuing a KCC with Web5
Example of issued KCC
How to install a DWN Protocol
How to store a VC in a DWN

Contact Us

If you have any questions or need any help, please reach out to us in our #kcc-hackathon channel on Discord.

Tuesday, 01. October 2024

IdRamp

Account Takeover in Healthcare: How to Deliver Security and Trust


Recent warnings from the U.S. Department of Health and Human Services highlight the alarming surge in ATO incidents targeting healthcare and public health organizations

The post Account Takeover in Healthcare: How to Deliver Security and Trust first appeared on Identity Verification Orchestration.

KuppingerCole

Transforming Access Management: Strategies for the New Digital Landscape


In today's rapidly evolving digital landscape, organizations face increasing complexity in managing application access. The proliferation of diverse applications, coupled with the end-of-life (EOL) for traditional solutions like Oracle and SAP GRC, necessitates a reevaluation of access governance strategies. Traditional methods often fall short in addressing these challenges, requiring a shift towards more comprehensive and integrated approaches.

Modern technology offers innovative solutions to these issues. Organizations must adopt tools that support a wide range of applications, ensuring seamless integration and consistent delivery of access governance. By embracing these advanced solutions, businesses can achieve fine-grained entitlement management and enhance overall security posture.

Martin Kuppinger, Principal Analyst at KuppingerCole, will discuss the changing landscape of Application Access Governance and Application Risk Management. He will explore the convergence with Identity Governance and Administration (IGA), examine various scenarios, and evaluate the applicability of different types of solutions.

Vinit Shah, VP Product Management at Saviynt, will address the specific challenges organizations face with their governance programs. He will highlight the need for fine-grained entitlement management and discuss the unique strengths of Saviynt's solution in delivering consistent governance across diverse applications.




FindBiometrics

Socure Teams with Dock on AI-secured Decentralized Identity

Socure and Dock have announced a partnership that could have a significant impact on the digital ID landscape. That’s because each company is a significant player in different areas of […]

SC Media - Identity and Access

Funding round brings in $20.5M for Apono

The company's AI-based platform helps manage access rights for cloud applications and enterprise databases, serving numerous clients, including Fortune 500 companies like Hewlett Packard Enterprise and Jasper.AI.



Okta expands capabilities of free Auth0 product

Key updates include a stronger Auth0 free plan with up to 25,000 active users, passwordless login, and custom domain support.



Identity-First IR solution by Silverfort launches

The solution uses machine learning and artificial intelligence to identify and freeze compromised identities, preventing further spread within a network.



FindBiometrics

ID Tech Digest – October 1, 2024

Welcome to ID Tech’s digest of identity industry news. Here’s what you need to know about the world of digital identity and biometrics today: Further EES Delays ‘Cannot Be Completely […]

SC Media - Identity and Access

Unsecured WordPress folder exposes ChoiceDNA records

Included in the records stored in the unsecured WordPress folder named "Facial Recognition Uploads" were names, biometric images, phone numbers, racial or ethnic identities, email addresses, and reasons for facial DNA analysis.



FindBiometrics

JP Morgan VP Maps mDL Growth Around Digital Trust Service

A senior JP Morgan Chase executive is highlighting the growing importance of mobile driver’s licenses. In a new post on his blog “Demystify Biometrics”, Ashok Singal notes that mobile driver’s […]

Acuity Previews Government-focused Biometric Identity Report

Acuity Market Intelligence’s latest Prism report of 2024, focused on biometric and identity technology in the public sector, is now available to download as a preview. When released, the “Biometric […]

Biometrics to Play Key Role in South Africa Home Affairs Digital Upgrade

The South African Department of Home Affairs, led by Minister Dr. Leon Schreiber, has unveiled a five-year plan to transition to a fully digital-first department by 2029. The “Home Affairs […]

Mastercard Upgrades ‘Identity Check’ Security for South Africans

Mastercard has unveiled improvements to its Identity Check program, aimed at enhancing security for South African cardholders during online transactions. The upgraded system incorporates the 3-D Secure 2.0 protocol, adding […]

Trinidad and Tobago to Implement e-Passports, Digital Travel Forms

Trinidad and Tobago will soon implement e-passports and digital embarking/disembarking forms, as announced by Finance Minister Colm Imbert during the budget presentation on September 30. The transition from machine-readable passports […]

Further EES Delays ‘Cannot Be Completely Excluded’: European Commission

New details are emerging about the difficulties that some countries are having in implementing the European Union’s planned biometric border system, along with further hints that its activation date will […]

Indicio

New industry report highlights Indicio’s masterful innovation in biometric digital identity for travel and hospitality sectors

Analyst firm Acuity Market Intelligence’s The Prism Project reports that the market for biometric digital identity in travel is expected to grow at a compound annual growth rate of 92% and generate over $72 billion globally by 2028. We take a look at key points from the report, how the industry is growing, and the next steps with decentralization.

By Tim Spring

Biometrics and digital identity 

As travelers increasingly expect to be able to do almost anything from the comfort of their home and the convenience of their smartphones, biometrics and digital identity are central to meeting these expectations of seamless digital travel.

To make this seamless world a reality requires a single digital identity that will work across platforms and unify the traveler’s journey from airport to destination and back again — and be capable of integrating ancillary travel and tourist services.

In a way, the technology goal is similar to how it is possible to log in to different websites using a federated identity, such as a Google account. But it differs in two important aspects: one, this digital identity is derived from government systems of record, such as a passport, rather than from a third-party identity provider; and two, you, not a third-party identity provider, control and store this identity and the personal data associated with it.

These features are critical for privacy and privacy compliance (the traveler always has the power of consent to sharing data) and security (removing the centralized storage of personal data, especially biometric data, removes the risk of mass data breaches, identity fraud, and catastrophic loss of trust).

The emergence of technology solutions that meet these requirements is explored in the new 2024 Biometric Digital Identity Travel and Hospitality Prism Report from Acuity Market Intelligence. The report, which first launched in 2023, analyzes the state of the solution market and sets out an evaluative framework for what is working best to deliver seamless travel. In sum, it is technology that “puts human beings first,” namely:

Digital identity belongs to the user it describes.
True ID empowerment relies on government systems of record.
Identity must be consistently and continuously orchestrated to remain secure.
Biometrics must be at the core of any sustainable digital identity ecosystem.

“By investing in biometric digital identity solutions like those identified in [the report], travel and hospitality stakeholders will find measurable benefits—from improved guest flow, to bulletproof compliance, to secure loyalty programs. But beyond the immediately tangible results, participating in the biometric digital identity ecosystem has a wider, global effect.”

The report highlights the work of Indicio and its partner SITA in developing solutions that “masterfully deploy biometric guest experiences around the globe.”

Indicio and SITA created the first successfully deployed Digital Travel Credential for seamless border crossing. By using Verifiable Credential technology, travelers were able to turn their passports into “government-grade” digital identities for instant, frictionless authentication. A key feature of the Indicio-SITA credential solution is the ability to bind the biometrics in a passport to the rightful owner of that passport. This, in effect, created two-factor biometric authentication without the need for airports or airlines to store biometric data.

To learn more about “bring your own biometrics,” and how Verifiable Credentials enable seamless data sharing, contact us for a demo — or book a free workshop where we’ll analyze your use case.

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post New industry report highlights Indicio’s masterful innovation in biometric digital identity for travel and hospitality sectors appeared first on Indicio.


FindBiometrics

Malaysia’s Digital ID Becomes Mandatory for Driver Services

Starting October 10, 2024, the Malaysian Road Transport Department (JPJ) will make it mandatory for users to log into its MyJPJ app using MyDigital ID, a single sign-on (SSO) system […]

Belize Police May Seek New Facial Recognition Vendor Over Data-sharing Issue

The Belize Police Department may soon be in the market for a new facial recognition system. According to local reports, the Department had initially planned to work with Biometrica, a […]

PingTalk

How to orchestrate risk and fraud services into user journeys

Ping’s orchestration capabilities and PingOne Protect are the tools you need to orchestrate user journeys with context and risk.

Imagine a digital world where every user experience is smooth, secure, and seamless. No more clunky logins or frustrating security hoops—just pure, uninterrupted interaction. At Ping, we’re on a mission to make this vision a reality. We know that as our valued customers, you understand the challenges of managing diverse digital journeys and integrating risk services. That's why our cutting-edge orchestration solutions transform these challenges into your organization’s greatest strengths. Here’s how Ping can help you orchestrate user journeys that leverage context, risk signals, and all your many risk and fraud investments. Hopefully, you already have either PingOne DaVinci or Intelligent Access via PingOne Advanced Identity Cloud or PingAM. If you aren’t utilizing orchestration yet, you can learn more here.

Monday, 30. September 2024

SC Media - Identity and Access

Researchers hacked Kia cars armed with only license plate numbers

A team of security researchers discovered a vulnerability that allows for Kia cars to be remotely compromised with nothing more than a license plate number.



Louisiana accounting firm breach affects 127,000 customers

Midsize accounting firm takes nearly a year to notify customers of a breach.



auth0

The Curious “Case” of the Bearer Scheme

A wrong interpretation of the OAuth specifications can lead to hours of debugging and headaches. Learn the details to avoid them.
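The “case” in the title appears to be letter case: RFC 9110 defines the HTTP authentication scheme token as case-insensitive, so `bearer` or `BEARER` is just as valid as `Bearer`, yet a server that compares the scheme case-sensitively will mysteriously reject such requests. A minimal sketch of a compliant, case-insensitive check (illustrative, not taken from the post):

```typescript
// Per RFC 9110 §11.1 the auth-scheme token is case-insensitive, so a
// compliant server must accept "bearer", "BEARER", etc., not only "Bearer".
function extractBearerToken(authorization: string | undefined): string | null {
  if (!authorization) return null;
  const spaceIndex = authorization.indexOf(' ');
  if (spaceIndex < 0) return null;
  const scheme = authorization.slice(0, spaceIndex);
  if (scheme.toLowerCase() !== 'bearer') return null; // case-insensitive match
  const token = authorization.slice(spaceIndex + 1).trim();
  return token.length > 0 ? token : null;
}

console.log(extractBearerToken('bearer abc123')); // "abc123"
console.log(extractBearerToken('Basic abc123')); // null
```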

Caribou Digital

Traditional evaluation looks backward; innovation looks forward.

Traditional evaluation looks backward; innovation looks forward. How do we evaluate innovation in real time?

Written by Elise Montano and Niamh Barry on the Measurement & Impact team at Caribou Digital.

Innovation programs cultivate an environment of experimentation and continuous improvement in developing, implementing, and scaling new ideas, products, or processes to drive growth. These programs depend on rapid, actionable insights to stay ahead and be ready to pivot strategies and optimize outcomes in real time. However, many traditional evaluation approaches are neither responsive nor adaptive to the speed and focus of insights needed in innovation programs.

In Caribou Digital’s work with Mastercard Strive, we sought opportunities to break away from traditional evaluation models to try a new approach that retained values of timeliness, flexibility, agility, and rigor, with a clear understanding of real-world constraints. From this experience, we devised an evaluative approach to support programs working in dynamic systems to generate impactful and incisive insights that enhance performance and impact.

Traditional evaluation is failing innovation programs.

Evaluations are usually conducted at predetermined moments in a program — for example, mid- or endpoint — rather than when stakeholders need information. Such evaluations are focused on pre-established questions and do not respond to program stakeholders’ dynamic and complex insights needs. They typically focus on accountability and documenting processes, not learning and improving program performance.

Innovative programs need real time information that supports dynamic learning, rapid response, and experimentation for continuous improvement. Traditional evaluation approaches are simply too rigid and fail to address these core needs.

Organizations that deliver complex programs need a better way of getting incisive insights at critical moments while maintaining evaluative rigor.

Our modular evaluation approach works with innovation programs.

Building on formative and developmental evaluation principles, we developed a flexible and agile approach to generating evaluative insights within the Mastercard Strive program. We call this “modular evaluation.” The characteristics of this approach include:

Embedded: Work is led and conducted by evaluation specialists immersed in the program delivery.
>> The Caribou Measurement and Impact team is part of Mastercard Strive program delivery, working daily with program directors, grantees, and partners. We used our detailed knowledge of the program and its complexities, constraints, and learning objectives in evaluations.

Modular: Evaluations are conducted in thematic modules that enable faster, more focused, and concise work.
>> We deployed three thematic modules — 1) small business outcomes, 2) program strategy and governance, and 3) partner management — allowing us to focus entirely on each module in turn.

Flexible deployment: Evaluations are delivered as and when insights are needed to support strategic decision-making, not according to a prescribed timeline.
>> We delivered the partner management module with our first phase of programs before developing a second phase so the insights from one could be rolled into the next. We also conducted our small business outcomes module twice, nine months apart, to generate insights when grantees had the most data available.

Lean: Evaluations focus only on pertinent questions and data collection methods. They enhance existing data collected through regular reporting with lean data collection where it counts.
>> For each module, we used grantee data from existing reports and filled the information gaps through focused interviews.

The benefits of this approach were immediately evident to our team and clients. We lined up evaluation modules to deploy throughout the project to provide insights at the moment they had the most strategic value.

We identified five key outcomes of this approach based on our experience.

1. Modular evaluations enable precision and flexibility, supporting insights at decisive times.

Our approach acknowledges that some modules or topics may require faster, more focused, and more concise work, or have different internal and external stakeholders reliant on insights. Each module can be managed independently, with its own evaluation questions and analytical frameworks, according to a timeline that best supports decision-making.

In Mastercard Strive, our small business outcomes module was adapted based on the outcomes expected at specific points. For example, the first iteration delivered insights on the impact of strategies for engaging small businesses with various solutions. It suggested where pivots could support deeper engagement and what other types of programs would address gaps in our portfolio. The second iteration — conducted nine months later — assessed early outcomes from our first phase of grantees (e.g., on small business capabilities and uptake of new business practices, products, and services) and revisited solution engagement data to incorporate new results and grantees. Future outcomes modules toward the end of the program will look at long-term outcomes and the sustainability of impacts for small businesses.

Mastercard Strive small business outcomes evaluation module

2. Focused modules support rapid delivery of insights.

Each evaluation module took at most three months to complete, and interim insights were often available within a month of launching data collection. In contrast, traditional evaluations can often take over six months to deliver final insights. Collecting and combining data across multiple themes from a wide range of sources adds complexity to the process of analyzing and presenting that data. In addition to lean data collection, agile approaches allow evaluators to focus on specific topics, dig into the details, and identify more nuanced and detailed insights.

3. Rapid insights support adaptive strategies.

Access to real time learning enables grant and fund managers to be dynamic and responsive, and make evidence-based decisions by working with our measurement and impact team. Our granting strategy evaluation module built on insights gleaned through ad hoc meetings and reporting, leading to a quick — but structured — approach to collecting and analyzing primary data. Within four weeks, our team had mapped the strengths and weaknesses of the granting and grantee management processes. We delivered concise recommendations that immediately fed into our second granting phase, including how we selected, developed, and managed programs.

4. Flexible timing and focused modules support stakeholder recall.

Traditional evaluations often interview stakeholders once on a wide range of topics, making for unwieldy interviews that ask questions about decisions made over a year before. A more flexible approach allowed our teams to conduct shorter, more focused interviews with stakeholders. The interviews were concise, asked questions about recent decisions, and allowed participants to prepare more effectively.

5. Modular evaluations are more cost-efficient.

We found this evaluative approach more cost-efficient than traditional evaluations for three reasons. First, the rapid, iterative nature of modular evaluations supports learning and continuous improvement that reveals opportunities for experimentation and adaptation earlier on, avoiding costly mistakes. Second, modular evaluations are inherently lean. Data collection builds on existing knowledge and is respectful of participants’ time, giving them clear boundaries of the scope of each module. Evaluation teams are embedded within the programs and don’t need to spend time learning about programs’ context. Finally, the modular nature of this evaluation supports scalability. Program managers have flexibility on what is included and how much budget they are willing to dedicate to evaluations, ensuring that each module delivers adequate value for money.

Deploying modular evaluations in innovation programs

Modular evaluations are distinct from ongoing monitoring or measurement. They make the most of insights from monitoring systems, but couple them with rigorous evaluative approaches that question how and why a particular outcome has been observed. To deploy modular evaluations, organizations require budget flexibility, an embrace of uncertainty about evaluation timing and focus, and a team that is open to and supportive of real-time learning.

At Caribou Digital, we’ve seen the value obtained from flexible, innovation-supportive approaches and are excited to promote a method that works with and for technology-focused innovation programs. We continue to deploy modular evaluations in our work and collaborate with others who are similarly interested in ensuring that evaluations are candid, purposeful, and timely. If you are interested in this approach, please contact Elise Montano or Niamh Barry.

Traditional evaluation looks backward; innovation looks forward. was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Zero Trust Beyond Identity: A Holistic Approach to Cybersecurity

by Alejandro Leal

The Zero Trust security model is designed to enhance cybersecurity by eliminating inherent trust within networks and requiring strict verification for every access request. However, Zero Trust is a multifaceted cybersecurity strategy that extends far beyond simple identity verification. While confirming user identity is essential, focusing solely on this aspect overlooks the depth and breadth of what Zero Trust truly encompasses.

To strengthen their defenses against increasingly sophisticated cyber threats, organizations must adopt a comprehensive Zero Trust strategy that secures all facets of their digital environment. By integrating robust measures across identity, devices, networks, applications, and data, coupled with analytics and automation, organizations can achieve a more resilient and proactive cybersecurity posture.

Here’s how these fundamental aspects—users, devices, applications, networks, data, visibility, and automation—play a critical role in a Zero Trust strategy:

1. Users: Zero Trust security starts with stringent identity verification. Techniques like multi-factor and continuous authentication ensure that only authorized users access resources, minimizing insider threats and unauthorized access.

2. Devices: All devices must be secured and continuously monitored. This includes compliance checks, real-time device inspection, assessment, and patching to ensure that devices accessing the network are not compromising security.

3. Systems and Applications: Protecting systems and applications involves implementing advanced measures such as software risk management, application inventory, and continuous monitoring for vulnerabilities and anomalies.

4. Networks: Focusing on granular policy, real-time access decisions, and segmentation strategies such as micro-segmentation helps control access and prevent lateral movement within networks, a critical strategy to isolate and contain threats.

5. Data: Zero Trust necessitates rigorous data protection measures such as encryption and access controls to ensure data integrity and confidentiality, crucial for compliance and security. Organizations can also adopt more advanced techniques, such as data loss prevention and data monitoring and sensing.

6. Visibility and Analytics: Comprehensive monitoring across networks and systems helps detect anomalies and potential threats, providing the necessary insights to preemptively address security issues.

7. Automation and Orchestration: Streamlining responses to security events through automation and orchestration reduces response times and enhances security operations, making threat detection and mitigation more efficient.

For organizations implementing Zero Trust, it's essential to integrate these elements into a cohesive strategy that aligns with Zero Trust principles, adapting over time to meet the dynamic nature of cyber threats. This holistic approach not only enhances security but also supports operational efficiency and compliance across all organizational levels.
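
To make the interplay of these pillars concrete, below is a minimal sketch of a policy decision point in Python. All signal names, fields, and thresholds are hypothetical illustrations rather than any vendor's API; the point is simply that each pillar contributes a signal, and no single check is trusted on its own.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    # Hypothetical signals a policy decision point might evaluate.
    user_authenticated: bool   # identity verified for this session
    mfa_satisfied: bool        # multi-factor authentication completed
    device_compliant: bool     # device passed posture/compliance checks
    network_segment: str       # segment the request originates from
    resource_segment: str      # segment the target resource lives in
    data_classification: str   # e.g. "public", "internal", "restricted"
    risk_score: float          # 0.0 (low) to 1.0 (high), from analytics

def decide(req: AccessRequest) -> str:
    """Return 'allow', 'step-up', or 'deny'; nothing is trusted by default."""
    # Users: identity must be verified on every request.
    if not req.user_authenticated:
        return "deny"
    # Devices: non-compliant devices never reach protected resources.
    if not req.device_compliant:
        return "deny"
    # Networks: micro-segmentation blocks cross-segment (lateral) movement.
    if req.network_segment != req.resource_segment:
        return "deny"
    # Data: restricted data demands MFA, regardless of other signals.
    if req.data_classification == "restricted" and not req.mfa_satisfied:
        return "step-up"
    # Visibility and analytics: elevated risk triggers re-verification.
    if req.risk_score > 0.7:
        return "step-up"
    return "allow"

# A compliant, authenticated request within one segment is allowed.
print(decide(AccessRequest(True, True, True, "hr", "hr", "internal", 0.1)))
```

In a real deployment, automation and orchestration would act on the "step-up" and "deny" outcomes, and the inputs would come from continuously refreshed telemetry rather than static flags.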

Join us in December in Frankfurt at our cyberevolution conference, where we will be discussing zero trust in more detail.

Take a look at some of the sessions on Zero Trust:

CISA Zero Trust Maturity Model
PANEL: Zero Trust in Practice: Challenges and Success Stories
Beyond the Now: Examining Emerging Trends in the Cybersecurity Landscape

Cloud Backup for AI Enabled Cyber Resilience

by Mike Small

This Leadership Compass provides a roadmap for organizations navigating the evolving landscape of cloud backup and cyber resilience, highlighting how AI and machine learning are transforming data protection strategies. As society becomes more digitally dependent and cyber threats become increasingly sophisticated, the need for backup and resilience solutions has never been greater. This report not only identifies the top vendors in the market but also delves into the innovative technologies driving the next generation of cyber resilience. It provides evaluations of leading solutions with an emphasis on regulatory compliance. This report is an essential guide for organizations looking to increase their cyber resilience.

PingTalk

Challenges in Preparing Ecommerce Channels for the Peak Season Rush

The ecommerce peak season rush is around the corner. Here's how to prepare to improve conversions, wow your customers, and keep fraudsters at bay.

Sunday, 29. September 2024

KuppingerCole

Leading the Cyber Charge: Insights from the CEO and CISO Office

Matthias invited KuppingerCole CEO Berthold Kerl and CISO Christopher Schütze to discuss the relationship between the CEO and the CISO in integrating cybersecurity into the company's business strategy. They highlight the key challenges faced by CEOs in integrating cybersecurity, the importance of communication between the CISO and the board, and the role of regulatory compliance. They also discuss the need to balance cutting-edge cybersecurity solutions with cost considerations and the trends to look out for in the coming years, such as AI-driven security and supply chain security.




Spherical Cow Consulting

Operationalizing Trust Frameworks: Who’s Going to Keep the Lights On?

Given my recent posts on digital wallets and the future of academic identity federation, you might be able to tell I’m on a bit of a rant. These topics share a common thread: we have a lot of experience building trust frameworks but significantly less experience in operationalizing those trust frameworks and making them sustainable.

What’s a Trust Framework?

Backing up a bit, let’s discuss a trust framework in this post’s context. According to NISTIR 8149, a trust framework is “the ‘rules’ underpinning federated identity management, typically consisting of system, legal, conformance, and recognition.” It’s about applying a whole set of technical and governance rules to the protocols, contracts, and regulations that let you use a digital identity from one organization to sign in to another organization’s services. A trust framework is the backbone of how federated identity functions effectively—at least in theory.

For those who care about ensuring safe and effective interoperability, though, the rules defined in a trust framework are critical and everyone should apply them in their federations. Operationalizing trust frameworks means doing more than ‘just’ defining the policies and rules (as if that’s not hard enough). It also means creating the practical mechanisms and governance structures to make those rules measurable, enforceable, and part of daily operations. It means moving from theoretical planning to real-world execution, with a way to know when the frameworks are being correctly applied and when an entity is out of conformance.

The Funding Crisis Nobody Wants to Talk About

This is where the real problem lies. Who funds this infrastructure? A trust framework involves a ridiculous number of different organizations. They are all supported in various ways, and the identity federation part of their services is rarely the primary reason they exist. (University IdPs are not the reason that universities exist. Identity and access management services are not why publishers sell journal subscriptions.)

The federation operators themselves are often underfunded and overstretched. Out of all the global federations, only a handful have the resources to innovate. The rest? They’re in survival mode—keeping old systems running and sticking with SAML because it works well enough. They can’t afford to migrate. How can they require their federation members to pay to comply with a trust framework when the benefits are intangible to their core missions?

Lessons from the R&E Space: We Can’t Just Ignore the Underfunded Parts

The worlds of commerce and government, while buzzing about digital wallets and verifiable credentials, need to wake up to the realities that the R&E federations have lived with for decades. Trust isn’t just a tech problem; it’s a governance problem, a funding problem, a sustainability problem. Right now, too many organizations are excited about issuing credentials without thinking about how to manage them when things go wrong.

The Research and Education (R&E) federations have been there, done that, and frankly, are still wondering if it’s worth doing again. They’ve experienced the growing pains that come with scaling trust across borders and organizations, but they’re also exhausted—financially and operationally. It’s not that they don’t want to help; they can’t afford to.

R&E federations have the trust frameworks. They don’t have the resources necessary to operationalize those frameworks in a way that reaches all parties involved.

What’s the Future for Trust Frameworks?

So, where does that leave us? If we want federated identity to work sustainably across sectors and borders, we need to figure out the support model(s). We need a governance structure that doesn’t just sound good in theory but works in practice without requiring federations to burn themselves out.

And yes, some of this may come from government backing. However, we also need to think about models that work where government involvement isn’t the answer—where decentralized, community-driven approaches, like REFEDS SIRTFI for incident response, are more appropriate. We need to build bridges between these different types of frameworks and find a way for them to coexist, or else we’re just going to keep reinventing the wheel.

Ultimately, operationalizing trust frameworks is about more than technology or policies. It’s about ensuring that the people running the systems have the support they need, that the lights stay on, and that we don’t lose trust simply because we can’t afford to maintain it. The R&E sector has valuable lessons to offer, but without a more collaborative and well-funded approach, the rest of the identity world might find itself learning those same lessons the hard way.

A Call to Action Without All the Answers

I recognize that I’m shouting about a problem for which I don’t have an answer. But that’s exactly why I’m getting all rant-y about this. I hope we can collectively develop more effective ideas—better than the grassroots community efforts of the past—so that every organization involved finally recognizes the infrastructure underpinning our trust frameworks as critical. This effort isn’t just about keeping federated identity afloat; we must support, value, and encourage it to evolve, so it can deliver on its promise for the long haul.

The post Operationalizing Trust Frameworks: Who’s Going to Keep the Lights On? appeared first on Spherical Cow Consulting.

Thursday, 26. September 2024

KuppingerCole

How to Build a Modern Approach to Identity Governance in a SaaS-First World

In today's tech landscape, the shift towards distributed software environments and diverse access standards has transformed identity governance into a complex maze. Our upcoming webinar, "How to Build a Modern Approach to Identity Governance in a SaaS-First World", addresses the challenges and solutions for managing identities and access in cloud-based SaaS environments.

Modern technology offers innovative approaches to tackle these challenges. By leveraging advanced tools and methodologies, organizations can achieve complete visibility and control over SaaS applications. This ensures robust security, privacy, and compliance, while simplifying the management of user entitlements and access roles.

Warwick Ashford, Senior Analyst at KuppingerCole, will discuss the security, privacy, and compliance challenges associated with the growing use of cloud-based SaaS applications. He will explain why complete visibility and control of SaaS applications is essential and outline the key elements to achieving that goal.

Chaithanya Yambari, Co-Founder and CTO at Zluri, will share insights into modern identity governance strategies that provide real-time visibility and automated lifecycle management. He will cover how these strategies simplify audits, ensure compliance, and offer fine-grained access control within each application.




Elliptic

OFAC and FinCEN target major Russian money laundering services including Cryptex and PM2BTC

The US Treasury’s Office of Foreign Assets Control (OFAC) has today issued sanctions against Cryptex–a crypto exchange registered in Saint Vincent and the Grenadines–due to its role in providing financial services to Russian cybercriminals, including receiving over $51.2 million in funds derived from ransomware attacks. OFAC has identified four cryptoasset addresses connected to this exchange. Alongside OFAC’s action, FinCEN has issued an order designating PM2BTC–another crypto exchange associated with Russian illicit finance–as a “primary money laundering concern”. Sergey Sergeevich Ivanov, also sanctioned today, is associated with both entities.


liminal (was OWI)

How Market Monitor Helps Industry Leaders Stay Ahead with Timely, Actionable Insights

Staying ahead of the curve is no easy feat in today’s fast-paced digital landscape. Whether you’re in marketing, compliance, or product management, navigating the flood of information and identifying what truly matters is challenging. We’ve developed the Market Monitor, the latest feature of our Link platform, to help with just that challenge. By providing tailored insights and real-time competitive intelligence, Market Monitor helps professionals from diverse industries take proactive steps to outsmart competition, optimize performance, and make data-driven decisions. The Market Monitor filters the noise by leveraging our years of market research expertise to connect in-market events to themes and insights, and by allowing personalized subscriptions to alerts.

To give you a glimpse into how Market Monitor can transform your day-to-day, let’s explore how different professionals leverage this powerful tool.

1. Alex – Marketing VP, Mid-Sized Vendor

“Market Monitor ensures that our campaigns target emerging fraud trends before the competition does.”

As a Marketing VP at a mid-sized cybersecurity vendor, Alex faces the challenge of keeping up with a constantly changing landscape of fraud prevention. With Market Monitor, Alex can quickly identify emerging trends that matter, from new threat vectors to competitor marketing shifts. Instead of sifting through irrelevant news, Market Monitor curates real-time insights that help Alex fine-tune campaigns, stay a step ahead of competitors, and measure the impact of her marketing efforts more effectively.

The Result: Alex’s team can quickly adapt strategies, driving higher engagement and revenue growth by focusing on what resonates most with customers.

2. David – CEO, Early-Stage Startup

“Market Monitor gives me the competitive edge to attract investors and launch our product with confidence.”

David, the CEO of an early-stage startup focused on AI-driven fraud detection, needs a clear understanding of the competitive landscape to secure funding. Market Monitor not only delivers insights into competitor strategies but also helps David identify potential investors who are showing interest in his sector. By using these insights to craft a compelling narrative, David can confidently approach investors and make data-backed decisions for his product’s go-to-market strategy.

The Result: David secures the resources and visibility needed to propel his startup toward success.

3. Maria – Product Manager, Cybersecurity Vendor

“With Market Monitor, I can build a product roadmap that’s aligned with real customer needs and market trends.”

As a Product Manager, Maria’s role demands a deep understanding of both customer pain points and competitor offerings. Market Monitor simplifies the process by delivering insights on customer sentiment, competitor product updates, and emerging technologies within the document verification space. Maria can now make more informed decisions about feature development and prioritize what resonates most with users.

The Result: Maria creates a product roadmap that drives adoption and innovation, ensuring her company stays ahead of the competition.

4. Ben – Biometrics Enthusiast & Career Seeker

“Market Monitor helps me stay updated on biometrics trends and explore new career opportunities.”

Ben is a biometrics professional who is eager to grow in the rapidly evolving field of cybersecurity. Market Monitor enables Ben to filter through the noise and access the most relevant news, industry insights, and even job opportunities specific to biometrics. With tailored updates, Ben can sharpen his expertise and confidently make his next career move.

The Result: Ben not only stays informed but also discovers new pathways for career growth in a field he’s passionate about.

5. Chris – Enterprise Risk Analyst

“I can evaluate fraud prevention solutions quickly and make data-driven recommendations with confidence.”

Chris, an Enterprise Risk Analyst at a large financial institution, is responsible for researching and recommending the best fraud prevention solutions. With Market Monitor’s real-time updates on new technologies, vendor comparisons, and industry best practices, Chris can streamline his research process. The tool’s ability to filter information by fraud type, deployment model, or budget allows Chris to make data-backed recommendations efficiently.

The Result: Chris minimizes fraud risk for his organization by staying ahead of emerging fraud technologies and selecting the right solutions.

Why Market Monitor Matters

In a world of overwhelming, disparate data, Link’s new Market Monitor ensures that you’re always one step ahead. Whether you’re adapting your marketing strategy, ensuring compliance, or navigating the competitive landscape, Market Monitor gives you the insights you need to act decisively. With tailored intelligence and real-time updates, professionals across industries use Market Monitor to make better, faster decisions and drive meaningful results for their organizations.

Are you ready to experience the power of Market Monitor for yourself? Try it now in Link and stay ahead of the market.

The post How Market Monitor Helps Industry Leaders Stay Ahead with Timely, Actionable Insights appeared first on Liminal.co.


IDnow

Time’s up: Urgent warning issued to all unlicensed gambling operators in Brazil.

The Brazilian Finance Ministry has set a new deadline for operators to apply for a license. Are you ready?

Discussions regarding Brazil’s online gambling market have been ongoing since 2018, when the National Congress tasked the federal government with regulating the industry.

Back then, it was hoped that new regulations would create a safer and more transparent gambling environment, which would increase government revenue through taxation, protect players from fraud and promote responsible gambling practices. Many believed that such changes would also attract both local and international operators and transform Brazil into a major hub for the gambling industry in Latin America.

There have been numerous twists and turns since then, but in July 2024, Brazil’s regulatory process for online gambling, which included 10 detailed ordinances, was finally completed.

The new rules impose strict requirements on both local and international operators, including financial thresholds and rigorous background checks to ensure transparency and prevent money laundering. A mandatory fee structure for license applications was also established, ensuring that only financially stable and well-regulated companies could enter the market.

Read more about the other regulatory requirements, along with the multitude of challenges and opportunities facing gambling operators in Brazil, in an interview with Ronaldo Kos, LATAM Gaming at IDnow, here.

Initially, companies had until August 2024 to apply for a license; those that were successful would be able to operate under the new bet.br domain from January 1, 2025; other gambling domains, including .com, would be blocked. It was announced that other licensing application windows would be announced in due time.

However, in mid-September, Brazil’s Finance Ministry announced that operators that had not applied for a license by midnight on September 30 would have to cease operations from October 1 until they had applied and received their permits. Any that continued to operate would be subject to having their websites blocked and fines of up to R$2 billion (US$354 million).

The two-week notice period was certainly a surprise for the industry. As gambling operators rush to get their applications submitted in time, it’s important they do things correctly and ensure they have the right technological stack in place, especially to comply with identity verification requirements.

Ronaldo Kos, LATAM Gaming at IDnow
All bets are off. Why the rush?

Ensuring all operators meet strict legal and ethical standards will strengthen the integrity of the sector and so is an admirable, yet sudden, step for the Ministry to take. Many, however, are left wondering the same thing: Why the change of dates? The cynical may think it is just to gather additional revenue as soon as possible, but according to Finance Minister, Fernando Haddad, this has nothing to do with it.

Many, including Fernando, believe Brazil is suffering from a ‘gambling epidemic,’ citing the nearly 25 million people who placed sports bets in the first seven months of 2024, averaging around 3.5 million new users per month.

Read more about the dangers of using grey market platforms in our blog, ‘Before Brazilian regulation: The dangers of gambling without KYC.’

Gambling has become a serious social problem and one the Finance Minister has vowed to crack down on.

This reason, among many others, underscores the importance of, and appetite for, regulating the Brazilian sports betting market as soon as possible. Further checks, linked to spending limits, are likely to be mandated in the coming months to protect the most vulnerable groups, including the elderly and those receiving state benefits.

How many Brazilian operators have applied for a license?

So far, 132 companies have submitted their applications to continue operations, and that number is constantly increasing as the deadline approaches.

All active operators who submit an application before the October 1 deadline will be allowed to continue operating, providing they are granted a license. Companies that applied before August 20 have been assured that they will receive a decision from the government regarding the status of their license application in time to be able to start operating from January 1. Those that applied after this date can continue to operate as usual until the end of the year but may not be able to continue ‘business as usual’ once regulated betting begins from January 1. Those that operate without a license run the risk of a fine and being banned for up to 10 years.

After the October 1 deadline, the government will process the pool of applicants and issue licenses. The government has until November 17 to give companies an answer and as of right now there has been no mention of a limit on the number of licenses being given. Once an operator has been approved, they then have 15 days to pay the fee of 30 million reais ($5.45 million) in order to continue doing business.

Playing by the rules: What are the guidelines going forward?

Once the official ordinances go into effect at the beginning of January, all authorized gambling operators must use the bet.br domain in order to be distinguished from unauthorized platforms, including .com.

Credit cards will no longer be a valid form of payment and instead only debit cards or Pix (the Brazilian instant payment system ecosystem) will be accepted.

Operators will also be required to perform a robust identification process during onboarding. It is therefore of utmost importance to choose an identity verification provider that can meet the very specific challenges of the Brazil market:

Be able to process a wide range of different Brazilian identity documents.
Perform KYC and gather customer information, such as full name, date of birth and valid identification documents.
Verify citizens’ CPF numbers and ensure that they do not appear on any of the various government-mandated restriction lists (see the check-digit sketch below).
Ensure facial biometric verification, which is now mandatory for registration, with regular facial re-verifications a requirement.
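
As a small illustration of the CPF check mentioned above, the number’s last two digits are check digits computed with a well-known modulo-11 rule. The sketch below (Python) validates only that a CPF is well-formed; confirming the number against the official registry and the government-mandated restriction lists still requires a lookup service.

```python
import re

def _check_digit(digits: str) -> int:
    """Compute one CPF check digit using the standard modulo-11 rule."""
    weights = range(len(digits) + 1, 1, -1)  # 10..2 for the first digit, 11..2 for the second
    total = sum(int(d) * w for d, w in zip(digits, weights))
    remainder = total % 11
    return 0 if remainder < 2 else 11 - remainder

def is_valid_cpf(cpf: str) -> bool:
    """Validate the format and check digits of a Brazilian CPF number.

    This only proves the number is well-formed; a real KYC flow must still
    verify it against official records and restriction lists.
    """
    digits = re.sub(r"\D", "", cpf)          # strip punctuation, e.g. 111.444.777-35
    if len(digits) != 11 or digits == digits[0] * 11:
        return False                          # wrong length or all-identical digits
    first = _check_digit(digits[:9])
    second = _check_digit(digits[:10])
    return digits[9] == str(first) and digits[10] == str(second)

# Example with a commonly cited test CPF:
print(is_valid_cpf("111.444.777-35"))  # True
print(is_valid_cpf("111.444.777-36"))  # False (bad check digit)
```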

Operators must also monitor transactions for suspicious activity to prevent money laundering, fraud and underage gambling. The goal of these KYC measures is to enhance consumer protection, promote responsible gambling and ensure compliance with Brazil’s broader efforts to create a transparent and secure gambling environment. Non-compliance with these requirements can result in severe penalties, including the loss of the operator’s license.

Operators will therefore need to ensure their platform can onboard customers quickly, safely and securely. A robust KYC process guarantees that customers are who they say they are, while preventing fraud and protecting the business and the customer at the same time.

Ronaldo Kos, LATAM Gaming at IDnow
Play on with IDnow.

At IDnow, we offer advanced Know Your Customer (KYC) services to assist with existing and upcoming regulations, in Brazil and beyond.

Our automated and secure identity verification solutions enable operators to quickly and accurately verify player identities, ensuring compliance with Brazil’s stringent KYC requirements.

We support real-time document verification, biometric checks and fraud detection, which helps operators prevent money laundering and underage gambling while maintaining a seamless user experience.

Read more about our Brazil-ready identity verification services.

By

Kristen Walter
Jr. Content Marketing Manager
Connect with Kristen on LinkedIn


SC Media - Identity and Access

Federal probe on Temu's data practices sought

In a letter to the U.S. Securities and Exchange Commission and the FBI, Republican members of the House Permanent Select Committee on Intelligence noted that such a probe is warranted due to the alleged relationship between Temu and Pinduoduo executives and the Chinese Communist Party.


Ocean Protocol

DF108 Completes and DF109 Launches

Predictoor DF108 rewards available. DF109 runs Sept 26 — Oct 3, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 108 (DF108) has completed.

DF109 is live today, Sept 26. It concludes on October 3. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF109 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF: To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from Predictoor DF user guide in Ocean docs.
To claim ROSE rewards: see instructions in Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF109

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF108 Completes and DF109 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


DHIWay

Digital Public Infrastructure is the linchpin of trustworthy data exchange

The honourable Finance Minister presented a much-anticipated budget for FY 24-25, and those who are familiar with Digital Public Infrastructure (DPI) will find some topics interesting. The budget mentioned DPI to support farmers by improving access to digital tools. An extension to that is data digitization and reusability through the Unique Land Parcel Identification Number (ULPIN) or “Bhu-Aadhaar” and the digitization of land records in urban areas with contextual GIS data.

Bringing DPI into a discussion around data digitisation, enabling reusable data exchange and creating digital identifiers opens the IT architecture to the concept of “trust layers”. Today, as many citizen services are transitioning from traditional Web 2.0 models to Web 3.0 design patterns, trustworthy data exchange is critical to reducing transaction friction. The “trust” component of data exchange is enabled through distributed ledger technologies (DLTs) such as blockchain. As new service deployments bring about digital identifiers, digital cards, and records, and, more importantly, there is a need to have authentic data sources to query, it is vital to view DPIs from the perspective of Open Trust Infrastructure.

Public instances of a blockchain, such as CORD, form the foundation of the Open Trust Infrastructure. Combining open standards, networks, and protocols with open innovation makes the available building blocks critical for designing applications and services around them. Trustworthy data exchange is enabled through the issuance and acceptance networks of verifiable credentials (VCs) and verifiable data streams, allowing purpose-specific data sharing and verification with a notice/consent design embedded within.
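
For readers unfamiliar with the format, a verifiable credential is, at its core, a signed JSON document: an issuer attests to claims about a subject, and any verifier holding the issuer’s public key can check it. The minimal sketch below follows the shape of the W3C VC Data Model; all DIDs and claim values are invented for illustration.

```python
import json

# A minimal W3C-style verifiable credential. All identifiers and claim
# values below are invented for illustration; real credentials follow
# the W3C Verifiable Credentials Data Model.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "SkillCredential"],
    "issuer": "did:example:training-institute",        # hypothetical issuer DID
    "issuanceDate": "2024-07-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:learner-456",               # hypothetical holder DID
        "skill": "Organic Farming Techniques",
        "level": "Advanced",
    },
    # In a real credential, `proof` carries the issuer's digital signature
    # over the canonicalized document. A verifier resolves the issuer's
    # key (for example via a registry anchored on a ledger such as CORD)
    # and checks the signature without contacting the issuer.
    "proof": {
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:training-institute#key-1",
        "proofValue": "<base58-encoded signature bytes>",
    },
}

print(json.dumps(credential, indent=2))
```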

The trust layer of the blockchain will bring about open infrastructure such as data fingerprinting registries, credentialing registries, and key management services. Data registries on the blockchain allow regulatory authorities to openly govern data while ensuring it is accessible to services with high fidelity and quality. Among the nine priority areas highlighted in the Budget 2024 speech, each can have a multiplier impact when DPIs with data registries are enabled for the services and innovative solutions can be incubated around such infrastructure.

Verifiable Credentials are increasingly becoming mainstream – sometimes without the end consumer realizing that the records they manage are secure, tamper-resistant VCs. This is a positive sign that the end-consumer experience can be transformed from a paper-based system of low-trust and low-fidelity to a high-trust and high-fidelity digitally secure method. With this, it is not surprising that organizations and educational institutions are today using credentialing platforms to issue VCs for learning and educational records (LERs), Skilling and Knowledge Records (SKRs) and Workplace Credentials (WPCs). Secure, verifiable credentialing is a natural outcome of an Open Trust Infrastructure.

With DPIs and Open Trust Infrastructure, it is crucial to view access to data services from the perspective of digital trust ecosystems. This, in turn, ensures better governance and operational management of such infrastructure through consortium-like approaches, which provide participants with the necessary incentives to maintain and improve these digital rails.

As ISVs and SIs come together to implement the critical parts of the digital infrastructure required to deliver records such as skilling and knowledge credentials or the Kisan Credit Cards, the DPI design model makes it possible to use the CORD blockchain to rapidly prototype and deliver production-ready software capable of being used at nation scale. The CORD Blockchain also allows for a decentralized and federated approach to data governance, data management, and data pipelines, which addresses the topic of legacy data silos and interoperability.

CORD Blockchain sandboxes allow ISVs to design and deploy applications which follow the guidelines of Digital Public Goods (DPGs) on infrastructure that is aligned with the principles of DPI. The growing number of use cases, an uptick in the number of tenders and RFPs that mention blockchain as the preferred underlying infrastructure, and a better appreciation of the concept of reusable digital identifiers have contributed to the rapid adoption of Web 3.0 models. Together with empowering the data principals with more fine-grained control over the data, such an approach also brings in newer ideas in economic models around sustainable and scalable infrastructure.

The post Digital Public Infrastructure is the linchpin of trustworthy data exchange appeared first on Dhiway.


Okta

Propel Your SaaS Apps Into the Future at Oktane

We’ve been discussing and reflecting on the Future of Identity over the last couple of months. It’s apparent to us that Identity is rapidly growing in its complexity. The surface area that our customers need to protect is growing, like a sunrise revealing a hidden terrain in the morning twilight. We realize that in a short time, the growing demands of customers will start to influence the roadmaps of SaaS companies and their developers to keep pace with protecting their customers and differentiating their value in their respective markets. The timing of this discussion couldn’t be better! We would love to meet you, hear about your vision and challenges, and nerd out on Identity and Software Development. Join us at Caesars Forum in Las Vegas, NV, on October 15-17, 2024, for Oktane, the biggest identity event of the year, and learn how to propel your SaaS apps into the future by connecting with Okta!

If you are currently not attending Oktane (but would like to) and you build SaaS apps, please reach out to us at wic-dev-advocacy[at]okta.com to request information regarding how you can obtain a pass. A limited number of passes is available, so reach out soon!

We’ve planned fantastic events to help you take your SaaS apps to the next level by leveraging Okta’s identity and user lifecycle platforms. Find us and let’s chat at these activities:

Breakout sessions

Building a SaaS Application with CIC
Wednesday Oct 16, 3:45 PM

Empower your Ecosystem with Okta
Thursday Oct 17, 12:45 PM

B2B SaaS App of the Future
Thursday Oct 17, 2:30 PM

Stop by the Oktane Dev Hub

Take a drive through the Oktane Expo Hall, and you’ll find the SaaS B2B experience in the Dev Hub at the intersection of SaaS Way and Integration Drive. Here, you’ll discover the ways Okta can help you create secure B2B SaaS applications. You’ll learn about identity and user lifecycle management best practices. Then build these standards-compliant solutions into your apps so you can submit them to the Okta Integration Network (OIN) and watch your customer base grow!

Check out the Oktane Hands-on Labs for interactive learning opportunities

Roll up your sleeves and get your coding on. This is your chance to get techy and build code using Okta solutions. Find us at labs where you can pick from options such as:

Scaling Okta App Management by Importing Data from PowerShell into Terraform
Streamline and scale your Okta app management by using PowerShell to export configurations and use Terraform to automate environment transitions.

Universal Logout: Instantly Sign a User Out across All Your Apps
Learn how to lock down all your apps and protect your customers completely at the first sign of trouble with one API!

Identify Inactive Okta Users with Okta Workflows
Determine if you have unused accounts that might have been missed by some manual deprovisioning process.

Labs are first-come, first-served and have limited capacity. If there’s a lab you’re interested in, be sure to show up on time!

B2B SaaS builders happy hour

Join us at a special event for those who build apps that connect with Okta for their customer base! This happy hour is where all the fun and connections happen. This is a private event exclusively for techy folk who build multi-tenant B2B SaaS applications and want to offer Okta Identity Provider connections as an option for their customers. Does this describe you and are you interested in attending this event? Please contact us at wic-dev-advocacy[at]okta.com to be added to our guest list. You must be an attendee at Oktane to attend this happy hour.

Okta Workflows community meetup

Join the Okta Workflows community meetup during Oktane 2024 in Las Vegas. Meet Workflows community members, colleagues, and friends over drinks and delicious appetizers.

Find resources, solutions, and networking opportunities at Oktane

We’re excited to connect with you and learn about your application’s needs! Please find us at Oktane, and feel free to comment if you have any questions or requests in the meantime.

Remember to follow us on Twitter and subscribe to our YouTube channel for exciting content.


DHIWay

My Journey with Sunbird RC: Revolutionizing Registries and Credentials

My journey with Sunbird began in 2017 when we at the EkStep Foundation joined forces with the Ministry of Education (formerly MHRD) on the National Teacher Portal initiative. This initiative soon evolved into DIKSHA (Digital Infrastructure for Knowledge Sharing), a national platform dedicated to enhancing school education in India.

The Vision Behind Sunbird

Sunbird was conceived as a collection of modular, configurable, and extendable building blocks designed to “share the ability to solve.” Think of it as a set of LEGO blocks or puzzle pieces that can be assembled in myriad combinations to foster innovative solutions within the educational ecosystem.

From the outset, we identified several essential building blocks that would form the backbone of any large-scale transformation. Among these were telemetry, knowledge graphs, data platforms, and crucially, registries and credentials. Our initial work on registries and credentials began as OpenSABER (Open Software Architecture for Building Electronic Registries), utilizing openbadges as a foundational element.

Empowering Individuals Through Data Ownership

At its core, Sunbird RC enables an attested source of information while granting individuals ownership and control over their data. This credentialing process—referred to as badging in the initial days—ensures that personal data is easily shareable and verifiable.

Registries serve as the seeds of trust in any decentralized ecosystem, while credentials empower individuals by giving them control over their data. This makes their information portable, verifiable, inclusive, trustworthy, and accessible to all.

Adapting to Change: The Impact of the Pandemic and Birth of Sunbird RC

The COVID-19 pandemic accelerated the development of digital solutions, prompting the eGov Foundation to create DIVOC for managing vaccination records and certifications. During this period, global standards around ‘verifiable credentials’ rapidly evolved. As a result, OpenSABER was rebranded as Sunbird RC (Registry and Credential), giving the project a fresh boost of enthusiasm.

Sunbird RC has emerged as a vital building block for establishing trusted registries and verifiable credentials across various domains. It powers other Digital Public Goods (DPGs) such as DIGIT, Inji, Sunbird Serve, and Sunbird ED. To date, it has facilitated billions of credentials and diverse registries across sectors like healthcare and education in India.

Sunbird RC offers microservices for credential issuance and management, enabling rapid deployment of electronic registries through configurable schemas. One standout feature is its ability to generate instantly verifiable credentials that can be accessed offline via printable QR codes.
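
To illustrate what a configurable schema can look like, here is a hypothetical entity definition in the spirit of JSON Schema, expressed in Python. This is not the exact Sunbird RC configuration format, only a sketch of the idea: declare an entity once, and the registry can expose management and credential issuance around it.

```python
from jsonschema import validate  # third-party: pip install jsonschema

# A hypothetical entity definition in the spirit of JSON Schema. The exact
# Sunbird RC configuration format differs; this only sketches the idea of
# declaring a registry entity through a schema.
teacher_schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "Teacher",
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "teacherId": {"type": "string"},
        "school": {"type": "string"},
        "subjects": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["name", "teacherId", "school"],
}

# Any standard validator can then gate what enters the registry:
validate(
    instance={"name": "A. Kumar", "teacherId": "T-001", "school": "GHS Mysuru"},
    schema=teacher_schema,
)  # raises jsonschema.ValidationError if the record does not conform
```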

Looking Ahead: Collaboration and Innovation

With the growing traction for credentialing systems globally across various use cases, Dhiway has joined Sunbird RC as a co-maintainer alongside the Centre for Open Societal Systems (COSS). Together, we aim to advance this project further by promoting adoption and enhancing innovations in digital identity, data registries, digital wallets, and credentialing systems.

Sunbird is an open-source collective seeded by the EkStep Foundation. The community has developed about 20 digital solutions—referred to as “building blocks”—that can be utilized individually or combined to create larger and more complex solutions.

I have had the pleasure of working with multiple community members from eGov Foundation, MOSIP, BMGF, some of the hyperscale cloud service providers such as AWS & Google, a few large scale multinational IT companies, a bunch of start-ups, government ministries, and many open-source communities.

I am thrilled to continue my tryst with Sunbird RC and grow it together with the community, leveraging Dhiway team’s extensive experience in building and maintaining open-source projects.

To learn more about Sunbird RC and join our vibrant community, visit https://rc.sunbird.org 

The post My Journey with Sunbird RC: Revolutionizing Registries and Credentials appeared first on Dhiway.


TBD

Preptember: Amping Up for Hacktoberfest 2024

TBD is participating in Hacktoberfest!

With October blazing through, we're greeted by pumpkin spices, the aroma of fall leaves drifting in the rain, and of course, the much-anticipated Hacktoberfest. Whether you're a seasoned contributor or new to open source, there's something for everyone.

🎉 We're Participating in Hacktoberfest 2024!

We have several projects with a variety of issues that we'd love your contributions for! For each issue that's merged, you'll earn points towards the TBD Hacktoberfest Leaderboard. Winners will receive exclusive TBD Hacktoberfest 2024 swag!

We're kicking off Hacktoberfest with two events:

September 27: tbdTV - Hacktoberfest
October 2: Show & Tell: TBD Hacktoberfest

Be sure to add them to your calendar.

📌 What is Hacktoberfest?

Hacktoberfest is a month-long (October) celebration of open source software. It's sponsored by DigitalOcean, GitHub, and other partners. Check out Hacktoberfest's official site for more details and to register. Registration is from September 23 - October 31.

📂 Dive into TBD's Participating Projects

We included a wide variety of projects and issues for Hacktoberfest 2024. Each of our participating repos has a Hacktoberfest Project Hub, which contains all issues you can pick up with the hacktoberfest label. For easy reference, repos with multiple projects will have multiple project hubs.

Explore our participating repos below and see where you can make an impact:

developer.tbd.website

Languages: MDX, JavaScript, CSS, Markdown
Project Description: Docusaurus instance powering the TBD Developer Website (this site).
Links: Hacktoberfest Project Hub | Contributing Guide

web5-js

Language: TypeScript
Description: The monorepo for the Web5 JS TypeScript implementation. It features libraries for building applications with decentralized identifiers (DIDs), verifiable credentials (VCs), and presentation exchange (PEX).
Links: Hacktoberfest Project Hub: Protocol Explorer | Hacktoberfest Project Hub: General | Contributing Guide

web5-rs

Language: Rust
Description: This monorepo houses the core components of the Web5 platform containing the core Rust code with Kotlin bindings. It features libraries for building applications with decentralized identifiers (DIDs), verifiable credentials (VCs), and presentation exchange (PEX).
Links: Hacktoberfest Project Hub | Contributing Guide

dwn-sdk-js

Language: TypeScript
Description: Decentralized Web Node (DWN) reference implementation.
Links: Hacktoberfest Project Hub | Contributing Guide

DWA Starter

Language: JavaScript
Description: Decentralized Web App (DWA) starter collection.
Links: Hacktoberfest Project Hub: VanillaJS | Hacktoberfest Project Hub: Vue | Contributing Guide

DIDPay

Language: Dart
Description: Mobile app that provides a way for individuals to interact with PFIs via tbDEX.
Links: Hacktoberfest Project Hub | Contributing Guide

DID DHT

Language: Go
Description: The did:dht method and server implementation.
Links: Hacktoberfest Project Hub | Contributing Guide

DCX

Languages: TypeScript, JavaScript
Description: A Web5 Protocol for Decentralized Credential Exchange.
Links: Hacktoberfest Project Hub | Contributing Guide

Goose Plugins

Language: Python
Description: Plugins for Goose, an AI developer agent that operates from your command line.
Links: Hacktoberfest Project Hub | Contributing Guide

Fllw, Aliased

Languages: TypeScript, JavaScript
Description: A reference app for building Decentralized Web Apps.
Links: Hacktoberfest Task: Fllw | Hacktoberfest Task: Aliased

Hot Tip: Not a coder? No worries! developer.tbd.website has tons of non-code related issues up for grabs.

📝 Guide to TBD x Hacktoberfest 2024

✅ Topic Check: Contribute to projects that have the hacktoberfest label. This ensures your PR counts towards the official Hacktoberfest prizes.

🏷️ Label Insights:

Start with an issue labeled hacktoberfest and comment ".take" to assign yourself the issue.
After submitting a PR and having it approved, the PR will be labeled hacktoberfest-accepted and you'll receive points on our leaderboard and credit towards the global Hacktoberfest 🎉
If your PR is marked with a spam or invalid label, re-evaluate your contribution to make it count.

🥇 Code and Conduct: Adhere to our code of conduct and ensure your PR aligns with the repository's goals.

🫶 Community Support: Engage with fellow contributors on our Discord for tips for success from participants!

🆘 Seek Help: If in doubt, don't stress! Connect with the maintainers by commenting on the issue or chat with them directly in the #🎃┃hacktoberfest channel on Discord.

🎁 Leaderboard, Prizes and Excitement

Be among the top 10 with the most points to snag custom swag with this year's exclusive TBD x Hacktoberfest 2024 design! To earn your place in the leaderboard, we have created a points system that is explained below. As you have issues merged, you will automatically be granted points.

💯 Point System

Weight | Points Awarded | Description
🐭 Small | 5 points | For smaller issues that take limited time to complete and/or don't require any product knowledge.
🐰 Medium | 10 points | For average issues that take additional time to complete and/or require some product knowledge.
🐂 Large | 15 points | For meaty issues that take a significant amount of time to complete and/or possibly require deep product knowledge.

🏆 Prizes

The top 10 contributors with the most points will be awarded TBD x Hacktoberfest 2024 swag from our TBD shop. The top 3 contributors in our top 10 will be awarded very limited customized TBD x Hacktoberfest 2024 swag with your github username on it. Stay tuned to our Discord for the reveal!

Keep an eye on your progress via our Leaderboard.

🎙️ Livestreams & Office Hours

Dive into our jam-packed Hacktoberfest schedule! Whether you're just here for fun or are focused on learning everything you can, we've got you covered:

September 27th, tbdTV Hacktoberfest Kickoff - Tune in for a special stream with Rizel Scarlett and Tania Chakraborty to learn how to boost your career through open source contributions.

October 2nd, Show & Tell: Hacktoberfest 2024 - Explore all our projects, what types of contributions you can make and more with Tania Chakraborty and Rizel Scarlett.

Every Tuesday, Community Office Hours - Join us every Tuesday at 1p ET for the month of October, where we will go over PR reviews, live Q&A, and more. This event occurs on Discord.

Live Events Calendar - Keep tabs on our Discord or developer.tbd.website for our future events & sneak peeks - we're always cooking up something new!

📚 Resources for First-Time Contributors

📖 How to Contribute on GitHub
🛠 Git Cheatsheet
🔍 Projects Participating in Hacktoberfest

Happy hacking and cheers to Hacktoberfest 2024! 🎉

Tuesday, 24. September 2024

SC Media - Identity and Access

Leadership, Privacy, and Navigating Information Security - Todd Fitzgerald - ISW24 #2


Executive Privacy: Safeguarding Leaders in a Hyper-Connected World - Chad Angle - ISW24 #2


Spruce Systems

The Importance of Protecting Digital ID Users from “Phone Home” Surveillance

Keeping Digital Identity Safe with Private Information Retrieval.

Digital identity systems theoretically offer substantial improvements over the current identity status quo, including superior fraud prevention and enhanced user privacy. As the industry comes together around standards and system designs, we at SpruceID firmly believe user privacy must remain front and center.

One of the more challenging aspects of building digital identity is protecting users from surveillance. It’s always been possible to track individuals through their movements and activities, in the real world and online. It is now quite common for digital data to be used to form profiles of Web users, and a poorly designed digital identity system risks replicating that pattern. This surveillance could be conducted by any number of legitimate (or not) entities, including commercial “data harvesters” or ID issuers themselves, such as the Department of Motor Vehicles.

Below, we outline one effort to combat the risk of surveillance through digital identity systems, using a process known as Private Information Retrieval, or PIR. By using cryptography to obscure remote data queries, PIR can reduce identity-based surveillance and enhance user trust in digital identity.

When A Question Reveals Too Much

One strength of existing physical IDs, such as driver’s licenses, is their natural protection against surveillance. In most cases, someone checking the ID looks at it and verifies its authenticity and resemblance to the holder, and that’s the extent of information capture. There’s no call out to a separate system to verify the legitimacy of that ID, no records kept that you may be showing it quite frequently to a clerk at your local store, and no concerns raised about whether you’re buying a pint or a pint of Ben & Jerry’s. 

This sort of protection is more challenging in a digital system, where there is an inherent tendency for technology to generate a robust event log for every transaction. A digital ID system with minimal privacy controls might query a central server for verification whenever your ID is checked and, accidentally or on purpose, create a detailed, real-time feed of your online and real-world activities. That data could have great value to the issuing authority and to numerous bad actors, who will no doubt attempt to access that treasure trove of personal information.

The implications of abuse of a data set containing the granular, verified behavior of individuals are sobering. Governments could use it to surveil activists and journalists. Abuse and stalking victims could be tracked by their abusers. Even challengers in democratic elections could find themselves targeted by unethical incumbents abusing the system from within. One worrying example may have unfolded recently in China when a local government allegedly used data from a COVID app to lock down protestors worried about frozen bank assets.

This is what’s known as a “phone home” problem in cybersecurity. Current standards for digital ID reduce this risk by storing an issuer’s digital signature on a mobile device, where it can be verified locally rather than needing to query a server. This works much the same way as a hologram on a physical driver’s license, allowing it to be verified locally without generating a digital trail.

But there are still circumstances where remote identity queries are necessary. This creates a design problem for a privacy-preserving digital ID system: how do you query a database without the database being able to record the query?

The good news is that thanks to innovations in cryptography, it's very feasible to ensure that digital identity systems don't risk exposing users to surveillance, even when a verifier has to "phone home."

Building Private Information Retrieval

A privacy-preserving database query needs to mask many kinds of information: the identity of the querier, the identity of the target of the query, what data is being checked, and the location of the query, for a start. At the same time, the data still needs to be restricted to a specific credential holder. 

This is possible thanks to a process called “Private Information Retrieval,” or PIR. The nuances of PIR can be illustrated by a few hypothetical approaches to obscuring data retrieval. For instance, if a database query downloads an entire database, the server won’t know which specific record the query was after. Another brute-force approach involves keeping many separate copies of a database that can be queried at random, making it hard for any one copy’s controller to aggregate a full picture of any set of queries.
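Before dismissing these brute-force ideas, it is worth seeing how far the replication approach can be pushed. The toy two-server sketch below, in Python, is the classic information-theoretic construction: the client sends each server a random-looking index set, and XORing the two one-bit answers recovers the desired bit while neither server alone learns which record was read. The database and names here are illustrative assumptions, not a SpruceID design.

```python
import secrets
from functools import reduce

def xor_answer(db_bits, indices):
    """Server side: return the XOR of the requested bit positions."""
    return reduce(lambda acc, i: acc ^ db_bits[i], indices, 0)

def private_read(db_bits, i):
    n = len(db_bits)
    # Client side: a uniformly random subset for server 1,
    # and the same subset with index i toggled for server 2.
    s1 = {j for j in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {i}                        # symmetric difference flips index i
    a1 = xor_answer(db_bits, s1)         # answer from server 1
    a2 = xor_answer(db_bits, s2)         # answer from server 2
    return a1 ^ a2                       # everything cancels except bit i

db = [1, 0, 1, 1, 0, 0, 1, 0]
assert all(private_read(db, i) == db[i] for i in range(len(db)))
```

Each server sees only a uniformly random subset of indices, so as long as the servers don't collude, a query reveals nothing about which record was fetched. Even so, each server must still touch the entire database per query, which motivates the more practical tools discussed next.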

These aren’t very practical solutions, though. We believe there’s much more promise in a relatively recent addition to the PIR toolbox: zero-knowledge proofs, or ZKPs. Using cryptographic encoding, ZKPs transform data, such as an ID holder’s identity, so the data can be confirmed without being revealed.

ZKPs can serve several roles in protecting user privacy during a digital ID database query. First, a package of ZKP-protected data can affirm that a verifier, such as a law enforcement officer, has a right to query an identity database without revealing the verifier’s specific identity. The verifier would then submit the credential that must be verified, again protected by ZKP encoding. This encoded credential could then be checked for validity without revealing the credential holder’s private information. 
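To make the core trick concrete, here is a minimal, self-contained Schnorr-style proof of knowledge in Python, the classic building block behind such schemes: the prover demonstrates knowledge of a secret x with y = G^x mod P without revealing x. The toy parameters (a Mersenne prime modulus and generator 3) and function names are ours for illustration, not SpruceID's implementation; real systems use vetted groups or elliptic curves.

```python
import hashlib
import secrets

P = 2**127 - 1      # toy Mersenne prime modulus; illustration only
G = 3               # toy generator

def H(*vals) -> int:
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def prove(x: int):
    """Prove knowledge of x such that y = G^x mod P, revealing nothing about x."""
    y = pow(G, x, P)
    r = secrets.randbelow(P - 1)
    t = pow(G, r, P)                  # commitment
    c = H(G, y, t) % (P - 1)          # Fiat-Shamir challenge
    s = (r + c * x) % (P - 1)         # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    c = H(G, y, t) % (P - 1)
    return pow(G, s, P) == (t * pow(y, c, P)) % P   # G^s == t * y^c (mod P)

secret = secrets.randbelow(P - 1)
y, t, s = prove(secret)
assert verify(y, t, s)   # the verifier learns that the prover knows x, not x itself
```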

Such ZKP-guarded queries make it far less possible to keep a record of useful information for surveillance: all that can be logged is that someone from a trusted entity queried some sort of information from some database at some time, which doesn't really allow for Sherlock-level sleuthing. It's this inability to even generate information that could be aggregated for exploitation that makes ZKPs so enticing.

Communicating the Intent of Privacy

71% of Americans now express concern about government use of data. At SpruceID, we expect that holding and demonstrating strong privacy principles will be key to unlocking the acceptance and broad adoption of digital identities. Concepts like Private Information Retrieval should be standard for any digital identity system, and ZKP technology is a promising tool in that effort.

Industry practitioners should take lessons from the past 50 years of software development and build personal security and privacy into systems from the start. That’s not to say this work will be simple and seamless. Of course, it will undeniably be challenging not only to design and implement truly privacy-preserving digital identity systems but also to convince a skeptical public. Both will be necessary, though, to foster broad user adoption and make the full promise of digital identity a reality.

Visit our website to learn more about SpruceID's stance on privacy and how we protect digital ID users from phone home surveillance.

Learn More

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


1Kosmos BlockID

Vlog: How 1Kosmos Can Be An External Authentication Method When Using Microsoft Entra ID?


Discover how 1Kosmos enhances Microsoft Entra ID with seamless identity-based authentication and passwordless access. Learn about new external authentication methods and how they empower organizations to protect critical assets, implement Conditional Access policies, and offer users more flexibility and security.

Robert MacDonald:

Hi everybody. Welcome to our blog. My name’s Rob MacDonald. I’m the VP of product marketing here at 1Kosmos, and I’m joined by Vik today.

Vik, how you doing?

Vikram Subramanian:

I am doing great, yeah.

Robert MacDonald:

Awesome.

Vikram Subramanian:

And just for everyone, Vikram Subramanian. I run solutions for 1Kosmos.

Robert MacDonald:

Awesome. And you do a great job at it, Vik, by the way. Appreciate having you.

All right, Vik, listen. Today, I wanted to just have a short little vlog with you about Microsoft. Microsoft has released a new capability, external authentication methods, into its Entra ID platform. And the feature will allow more customers to expand their use of 1Kosmos's identity-based authentication and passwordless access capabilities across far more Microsoft and non-Microsoft environments while maintaining all of the Conditional Access policies that they've built. 1Kosmos as an external authentication method will allow organizations to seamlessly protect Microsoft resources while also still protecting those platforms that fall outside of the Microsoft coverage.

All that to say, Vik, there are many use cases we can help fulfill to help improve an Entra ID investment. 1Kosmos as an external authentication method, while that is a new feature to Entra ID, what is it?

Vikram Subramanian:

Good question. Many organizations obviously have invested in Entra, and I’m glad that both of us are actually getting that right. It’s not Ontra. It’s Entra.

Robert MacDonald:

It’s not Ontra. It’s Entra.

Vikram Subramanian:

The main use case was that, hey, given that organizations are already invested in Entra, people are already authenticating in Entra, and they're probably using authenticators that don't necessarily comply with the requirements that the enterprise has, or don't necessarily deliver the experience that the enterprise wants. It is a pretty big change if we tell people to actually move all of their authentication and utilize 1Kosmos as an IDP.

A great use case over here, and one we've always been asked about by clients, is: can we use 1Kosmos as an MFA within our Entra ecosystem? And now, with external authentication methods, we can. And what this provides is the ability for the enterprise to go ahead and introduce 1Kosmos to their end users and slowly start migrating them towards utilizing passwordless in its entirety.

Robert MacDonald:

Interesting, okay. That's a lead-in to my next question, which is, with EAM, or external authentication methods, within Entra ID, how can 1Kosmos help organizations within that kind of use case?

Vikram Subramanian:

A great use case is where organizations want to protect their crown jewels, so privileged assets, restricted assets, or restricted applications, restricted transactions. Anytime anything requires an MFA, you can put 1Kosmos as the authenticator of choice within your Conditional Access policies. Earlier, you were not able to do this. It's a great feature introduced by Microsoft, and we have immediately jumped in and integrated with them utilizing that feature, which means now, within the Conditional Access policy, you can select 1Kosmos as the authenticator for when certain conditions are met. And you can also specify what kind of authentication you want the user to do. Do you want to depend on device biometrics or the superior Live ID that we offer?

Robert MacDonald:

Fair enough. With this change to the way in which Microsoft’s offering their Entra ID solution, why is that important not only to organizations, but maybe the industry at large?

Vikram Subramanian:

See, now this is the introduction of choice. Earlier, within the Microsoft ecosystem, the choices were very limited in terms of which authenticators you could use for doing MFA. And now, with the open ecosystem that has been introduced, 1Kosmos can also be utilized by organizations. Many of our clients who have already invested in Entra could not leverage 1Kosmos without really making a huge organizational change and were stuck in their implementation. Now, this frees them up. They can utilize 1Kosmos as an MFA solution or as a passwordless solution. And it gives them choice.

And they can also utilize Conditional Access policies. That is very important. Why are Conditional Access policies important? Because everyone has them. Everyone is going to be using them. And now, you can also utilize that for Live ID.

Robert MacDonald:

Awesome. That’s amazing.

Vik, I appreciate you swinging by today and going through this quick use case with us on our vlog. I look forward to talking to you on our next one.

Vikram Subramanian:

Absolutely.

The post Vlog: How 1Kosmos Can Be An External Authentication Method When Using Microsoft Entra ID? appeared first on 1Kosmos.


IDnow

Paperless signing: Discovering the advantages of digital signatures.

IDnow ebook reveals how the latest digital signature solutions can help unlock valuable business opportunities.

From symbols and pictographs, to ink and pen, signatures have been around for thousands of years, dating all the way back to 3000 B.C. Interestingly, the idea of digital signatures can be traced to the Wild West era when businesses used the dotted communications of Morse code and telegrams to sign contracts.

Fast forward to the 20th century, when in 1976 the first concept of a digital signature was introduced by cryptographers, Whitfield Diffie and Martin Hellman. Today, compliant and fully digital signatures are integral to everyday business operations, securely facilitating contract signings worldwide, instantly accessible from any location, at any time.

As the world shifts from physical to electronic, digital signatures will be essential in ensuring trust and authenticity in transactions. Digital signatures can streamline the process of signing contracts and enhance a company’s ability to verify, comply and protect digital identities, fortifying the trust needed to thrive in an increasingly digital world.
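For readers curious about the machinery underneath, here is a minimal sketch of the cryptographic core of any digital signature scheme, using an Ed25519 key from Python's cryptography package. It illustrates the general principle only, not IDnow's product implementation: any change to the signed document, however small, invalidates the signature.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()   # held by the signer
public_key = signing_key.public_key()        # shared with anyone who must verify

contract = b"Service agreement v1: party A agrees to deliver to party B ..."
signature = signing_key.sign(contract)

public_key.verify(signature, contract)       # passes silently: document intact
try:
    public_key.verify(signature, contract + b"!")   # one altered byte...
except InvalidSignature:
    print("tampering detected")                     # ...and verification fails
```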

Click below to check out our latest ebook, ‘Expert guide to digital signatures.’

Expert guide to digital signatures.

What are digital signatures and the history behind them? Download to discover:

The different types of digital signatures
Benefits of implementing a digital signature solution
How IDnow can help unlock valuable business opportunities

Read now

Navigating the different types of digital signatures.

Much like a traditional wet ink signature, a digital signature serves as legal evidence when concluding a transaction. However, the advantages of digital signatures extend far beyond their physical counterparts. They are faster, entirely digital, and can be signed remotely from anywhere at any time, eliminating the need for signers to visit a physical location or branch.

This convenience not only streamlines the process but also enhances efficiency, making it easier and quicker to finalize important documents without the logistical challenges associated with physical signatures. Different types of digital signatures include:

Simple Electronic Signatures (SES), which can be as straightforward as typing a name or clicking an "I agree" button. They are versatile and accessible, making them ideal for everyday transactions. However, unlike Advanced Electronic Signatures and Qualified Electronic Signatures, an SES lacks strong authentication and data integrity measures. It is best suited for low-risk, informal agreements and may not be appropriate for high-value or legally sensitive transactions.

Advanced Electronic Signatures (AES) uniquely identify and link the signer to the signature, ensuring it can be attributed to them. They meet legal requirements for court admissibility and comply with regulations like the EU's eIDAS. While not as robust as a QES, an AES still offers significant legal weight and is recognized as a secure and reliable electronic signature in many jurisdictions.

Qualified Electronic Signatures (QES) are highly secure digital signatures backed by a qualified certificate from a trusted Certificate Authority. This certificate meets stringent regulatory standards and links the signature to the signer's verified identity, ensuring the highest level of legal recognition and security.

Embracing digital signatures.

The use of digital signatures has risen by 50% since the COVID pandemic, according to airSlate, with 69% of respondents continuing to use digital signatures due to their increased convenience and security.

This digital signing boom shows no signs of slowing, as the global digital economy continues to expand, with people increasingly living, working and transacting across borders, industries and use cases in this remote, digital landscape. Digital signatures, which combine remote, 24/7 convenience without compromising on security, are purpose-built for the digital world.

The increasing importance of digital signatures was put into the spotlight in July 2016, when the European Union (EU) issued the eIDAS regulation (electronic Identification, Authentication, and Trust Services), which increased the significance of electronic signatures drastically. Digital signatures are therefore widely used in the EU for various purposes, primarily to ensure the authenticity, integrity and non-repudiation of electronic documents and transactions.

For example, digital signatures are necessary and extremely important in the following situations:

Legal contracts and agreements: Digital signatures are used to sign legally binding contracts and agreements, including sales contracts, employment agreements and service contracts. They provide assurance that the signer has accepted the terms of the document.

Financial transactions: In the EU, digital signatures play a crucial role in financial transactions, including online banking, electronic fund transfers and digital payments. They help verify the identity of the parties involved and ensure the security of the transaction.

Regulatory compliance: Digital signatures are often required to comply with various EU regulations and directives, such as the eIDAS regulation, which establishes a legal framework for electronic signatures, seals and time stamps.

The importance of the eIDAS Regulation.

The eIDAS Regulation (EU No 910/2014) stands as a cornerstone in Europe’s digital landscape, harmonizing rules for electronic identification and trust services across the European Single Market. It sets stringent standards for electronic signatures, notably QES, ensuring they carry equivalent legal weight to traditional handwritten signatures.

Crucially, eIDAS mandates mutual recognition of electronic identification methods between Member States, facilitating seamless cross-border transactions. This regulatory framework not only enhances security and trust in electronic communications but also promotes the digital economy’s growth by enabling secure and legally binding electronic transactions throughout the EU. This includes when an individual wants to sign a document with a QES and needs to be identified and verified for security purposes such as the following:

Identity verification: The identity of individuals must be verified before issuing a qualified certificate for electronic signatures. This can be done using automated methods but must meet high security and reliability standards.

Remote identification: Automated processes for remote identity verification are permissible under eIDAS, but they must ensure the same level of assurance as physical presence verification. Techniques may include video identification, use of eID cards or other secure methods depending on the specific EU country and industry.

Together, these requirements guarantee that a high level of security and reliability is met across all types of document signing.

Unlocking the benefits of digital signatures – from security to sustainability.

From removing the friction and cost associated with manual processes, to preventing fraud and fueling growth, implementing digital signatures delivers a myriad of benefits, enabling businesses to:

Improve security: Digital signatures use cryptographic techniques to ensure the authenticity, integrity and non-repudiation of signed documents, making them highly secure and resistant to tampering or forgery. Over 70% of users report fewer security and compliance incidents.

Fight fraud: In recent years, the global volume of digital fraud attempts has increased by 80%, according to TransUnion. By using digital signatures, the identity of the sender is verified through a unique digital certificate that links the signature to the sender's identity, making it difficult for fraudsters to impersonate or steal someone's identity.

Enhance user experience: With digital signatures, customer satisfaction is increased for more than 70% of users, as the friction and frustration experienced in physical signing is removed, giving an enhanced, more streamlined user experience.

Establish and build trust: Organizations that have implemented digital signatures report a 500% increase in customer loyalty. By ensuring the authenticity, integrity and non-repudiation of digital documents and transactions, digital signatures reassure users that their communications and transactions are protected from tampering and fraud.

Cut costs: Without the need for printing, mailing and storing paper documents, digital signatures reduce hard costs by an average of 56%, creating a more efficient and cost-effective process.

Go green: By reducing paper usage and transportation associated with physical document signing, digital signatures help reduce the environmental impact of traditional paper-based processes and could save up to 2.5 billion trees in less than 20 years.

Achieve compliance: With nearly €1.87 trillion of global GDP tainted by money laundering each year, it is imperative for companies to meet all regulatory standards. Digital signatures, recognized as legally equivalent to handwritten signatures in many jurisdictions, including the EU's eIDAS regulation and the US ESIGN Act, help businesses avoid fines and meet compliance requirements efficiently.

Boost conversions: With users signing 79% of agreements within 24 hours, digital signatures enable automated signing workflows, allowing documents to be routed, reviewed and signed electronically, streamlining business processes, reducing bottlenecks and accelerating decision-making.

Increase accountability: Digital signatures often include built-in audit trail capabilities, recording information such as the identity of the signer, the time and date of signing and any changes made to the document after signing. Because of this, companies witness an 80% reduction in signing errors, helping ensure accountability and transparency.

Scale and drive growth: Digital signatures can be used globally, making it easier to conduct business across borders and collaborate with partners, suppliers and customers in different locations. It comes as no surprise that global e-sign transactions have risen from 89 million to 754 million in just over five years.

Why sign with IDnow?

At IDnow, we provide comprehensive signing solutions tailored to meet the diverse needs of any business operating in today’s global digital economy. Whether you need fully automated, video, eID, or in-person, IDnow delivers a versatile, secure and enhanced user experience, ensuring that your customers sign on the dotted line, every time.

This even includes our newest signing solution, InstantSign, which issues a QES using any previous AML-compliant identity verification in seconds.

If reverification is needed due to a user's expired identity document, InstantSign works seamlessly with IDnow's full range of identity verification solutions, keeping ident data up to date and providing the perfect, compliant solution for financial services organizations.

Any ident, from any vendor, anytime—truly one of a kind.

By Kristen Walter, Jr. Content Marketing Manager
Connect with Kristen on LinkedIn


auth0

DevDay 2024 Recap: What's New In Auth0?

Developer Day 2024 is a wrap and Auth0 announced some cool stuff! Let's recap some of them in this blog post.

uquodo

UAE’s 2024-27 AML Strategy: How uqudo supports Fraud Prevention

The post UAE’s 2024-27 AML Strategy: How uqudo supports Fraud Prevention appeared first on uqudo.

Indicio

Three ways decentralized identity delivers transformational Digital Public Infrastructure

The post Three ways decentralized identity delivers transformational Digital Public Infrastructure appeared first on Indicio.

DPI — or Digital Public Infrastructure — is a new, white-hot topic in development. In 2023, at a G20 meeting in New Delhi, India, DPI was defined as:

“…a set of shared digital systems that are secure and interoperable, built on open technologies, to deliver equitable access to public and/or private services at a societal scale.”

The Bill and Melinda Gates Foundation describes DPI as:

“Like roads — a physical network essential for people to connect with each other and access a huge range of goods and services.”

In essence, how can governments invest in digital infrastructure that accelerates sustainable development — and what does that technology look like?

A trusted information superhighway?

If you’re thinking that maybe we’ve been here before, you’re half right. The internet and the web that came to sit on top of it utterly transformed how we interact, access and share information, and engage in economic and other activities. 

But the internet and web evolved without a crucial element — a verification layer for people and organizations, which has led to widespread identity fraud, privacy concerns,  and security breaches. So it’s not surprising to see digital identity systems being described as “foundational to DPI.”

Simply put, you can’t create a thriving, inclusive, innovative digital economy if you can’t trust that the person or organization you’re interacting with online is who they claim to be. Similarly, people won’t trust systems that can’t protect their personal data, not least their biometric data. 

This is why decentralized identity and Verifiable Credentials can be justifiably described as "game-changing" technology for DPI. Here, we explain three of the most important benefits.

1. Seamless authentication and data sharing

Decentralized identity means that people, organizations, or devices hold their own data in secure Verifiable Credentials in digital wallets on mobile devices. First, this eliminates the need to centrally store personal or other valuable data in order to manage identity and verification. This removes a major security risk and provides data privacy. People can now consent to sharing their data. Companies and organizations are freed from onerous data privacy compliance.

Second, the source of a Verifiable Credential — the organization that issued it — is always knowable. The data in the Verifiable Credential is digitally signed, which means that any attempt to alter it is automatically detected. Verifiable Credential data can be shared by creating a link or QR code from simple software on a mobile device and verified with simple software on the web or on a mobile device.
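As a minimal illustration of that tamper-evidence, here is a sketch in Python of a credential whose claims are signed by the issuer; flipping a single claim makes verification fail. This is our own toy construction for this post, not a full W3C Verifiable Credential implementation.

```python
# pip install cryptography
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()    # held by the issuer, e.g. a government
issuer_pub = issuer_key.public_key()         # published so anyone can verify

claims = {"name": "A. Holder", "licence_class": "B", "over_18": True}
payload = json.dumps(claims, sort_keys=True).encode()
credential = {"claims": claims, "proof": issuer_key.sign(payload).hex()}

def verify_credential(cred, pub) -> bool:
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(cred["proof"]), payload)
        return True
    except InvalidSignature:
        return False

assert verify_credential(credential, issuer_pub)       # untouched: verifies
credential["claims"]["over_18"] = False                # any alteration...
assert not verify_credential(credential, issuer_pub)   # ...is detected instantly
```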

Add these all up and you have portable trust. You no longer have to engineer complex direct integrations to share data; and if you trust the source of the credential — say a bank, a business, or a government — you can act on the information instantly, because you know it hasn’t been altered. 

Decentralized Identity means information from anywhere can be verified anywhere. If this seems abstract, think of it in the context of, say, India, which has 63 million micro industries. With Verifiable Credentials, each of these economic actors can authenticate who they are interacting with and share data that can be trusted.

It gets better. By combining Verifiable Credentials with decentralized identifiers, people and organizations can authenticate and interact with each other directly, across secure communication channels (DIDComm). This communication protocol enables their mobile devices to take on the functionality of an API but with better security. Now they can integrate and use information in much more powerful ways.

Think of it as the capacity to create secure digital roads. These roads have no tolls. They are not owned by a platform, which means that the value created by digital interaction goes directly to those creating the value. These roads can be created from anyone to anyone, anywhere to anywhere.

2. Rescue biometric infrastructure from catastrophic failure

Biometrics are a powerful way to manage authentication: We bring our own, they don't have to be remembered or constantly changed, and they're fast to verify. For these reasons, they are being rapidly adopted everywhere.

But though biometrics are supposed to replace passwords, they, unavoidably, have replicated one of the critical architectural weaknesses of password authentication: centralized storage. In order to verify a biometric template (essentially, a hash of a biometric), a verifying entity must store that template in a database.

Centralized storage has already led to catastrophic security failures, and the factor that makes biometrics so powerful — their uniqueness — turns their theft into an existential risk: You can reset a password, you can’t reset yourself, biometrically. 

Verifiable Credentials make these problems go away, giving you all the benefits of biometrics without the need for centralized storage.

Here’s how it works: When a person’s biometric is first captured during identity assurance, the biometric template is also rendered and issued to them as a Verifiable Credential. This means that when a person presents for a biometric scan, they also present their biometric VC. The verifying entity compares the scan to the template in the credential.  That’s it — all the benefits of biometric authentication without the need for centralized storage.
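A rough sketch of that verification flow follows, with made-up template vectors and a hypothetical similarity threshold; real systems use modality-specific matchers and carefully tuned thresholds, and the credential's signature would be checked first, as in the sketch above.

```python
import math

MATCH_THRESHOLD = 0.95   # illustrative; tuned per biometric modality in practice

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def cosine(a, b):
    """Similarity between two biometric template vectors."""
    return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

# Template carried inside the holder's signed credential since enrollment.
credential_template = [0.12, 0.87, 0.33, 0.56]

# Fresh capture at the point of verification.
live_scan = [0.11, 0.88, 0.31, 0.57]

if cosine(live_scan, credential_template) >= MATCH_THRESHOLD:
    print("match: holder verified, with nothing stored in a central database")
```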

“Bring your own biometrics” also provides a way to deal with the problems of biometric fakery, whether using silicone masks or generative-AI “deepfakes.” By requesting a credential containing a biometric template, verifiers have a way to double check the person is who they really are.

3. Decentralized governance

In creating Digital Public Infrastructure for people to authenticate and share data, it is impossible to know in advance every possible way they will use it to create value. 

Decentralized Ecosystem Governance is a simple way for the entity responsible for each use case to implement the governance rules it needs for its roads to work and to be accountable to its users (e.g., which credential issuers can be trusted, what information flows are needed).

This way of implementing governance (through machine-readable files that are propagated to each participant’s credential software) has the information-handling capacity to meet whatever variety the system throws at it, not least by virtue of localizing the governance decision-making and making it easy to implement changes based on feedback. 

For example, when we worked with the government of Aruba to implement Verifiable Credentials for Covid testing, the government needed to be able to rapidly change verification workflows based on new scientific information (e.g., test type, test validity times). Decentralized Ecosystem Governance was developed to meet this need. It has been developed into a full specification for governance by the Decentralized Identity Foundation (DIF).

As Digital Public Infrastructure, decentralized identity combined with decentralized governance gives people and entities the control they need to create frictionless ecosystems, the ability to rapidly respond to feedback, and clear accountability.

Lightweight and resilient

All these solutions can be implemented in a matter of weeks and without the eye-watering costs normally associated with infrastructure projects. That's because Verifiable Credentials can work with, rather than require replacing, existing systems. And, because they are based on interoperable standards and open-source code, they can unify disparate systems.

This effectively makes decentralized identity a universal DPI layer for seamless authentication and data sharing, and one that can scale easily and start generating network effects rapidly. 

Not every version of decentralized identity delivers the best possible combination of benefits. To learn more about the options you have, and how our government customers are using this technology to drive digital transformation, contact us and book a free, no-obligation workshop where we'll analyze and discuss your use case.

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Three ways decentralized identity delivers transformational Digital Public Infrastructure appeared first on Indicio.


Ontology

Telegram’s Policy Shift: The Need for Decentralization and Stronger Privacy Protections


The arrest of Telegram CEO Pavel Durov and the platform’s subsequent decision to provide user data to authorities has sparked widespread concern, not just among privacy advocates but also in political dissident communities. This moment marks a critical turning point in the ongoing debate about balancing privacy and regulation in digital spaces. But beyond Telegram’s headlines lies a broader narrative — one about centralized platforms, their vulnerabilities, and the growing urgency for decentralization and self-sovereign identity.

The Fallout of Durov’s Arrest

As reported recently, Durov's arrest at a Paris airport and the criminal charges he now faces have cast a spotlight on the inherent risks of centralized platforms. Telegram, which has been lauded as a beacon for privacy and free speech, now finds itself caught between the demands of law enforcement and the privacy expectations of its nearly billion-strong user base.

Telegram's new policy — to hand over user data like IP addresses and phone numbers to authorities with valid legal requests — marks a significant shift. The app, once seen as a safe haven for political dissidents, journalists, and activists in oppressive regimes, is now under scrutiny. Critics question whether this change will make Telegram more susceptible to the influence of repressive governments, undermining the platform's core mission of protecting user privacy.

But Durov’s predicament is not just about Telegram. It’s a wake-up call for the entire digital ecosystem and a reminder of how centralized platforms are vulnerable to external pressures — from governments, corporations, or even internal mismanagement.

Centralization’s Fatal Flaw

As I previously discussed in my article, “The Telegram CEO’s Arrest Highlights the Urgent Need for Decentralization and Privacy Protections,” the key issue with centralized systems is their susceptibility to single points of failure. Whether it’s the CEO of a company being detained or a server being seized, centralized platforms are fragile by design. The arrest of Durov underscores how much risk is embedded in centralized models. When the figurehead or infrastructure of a platform is compromised, so too is the privacy and security of its entire user base.

Telegram’s decision to share user data highlights the thin line that centralized platforms walk. Their leadership can be coerced, their systems can be hacked, and their policies can be bent to serve the interests of governments, often at the expense of user privacy. This is where decentralization steps in as a necessary solution.

Decentralization: The Answer to Protecting Privacy

In contrast, decentralized systems are designed to be resistant to these kinds of pressures. As I explored in "Decentralized Identity and Reputation: Balancing Freedom and Regulation in Digital Platforms," platforms built on decentralized frameworks lack a central authority that can be easily compromised or coerced. Instead, they rely on distributed networks that empower users with control over their data and communication.

For instance, decentralized identity (DID) is a transformative technology that allows individuals to own and manage their identities across platforms without needing to rely on a centralized entity like Telegram. With DID, there's no single point of failure; no CEO can be arrested, no server can be seized, and no government can force a handover of user data. Users control their own credentials, and privacy becomes a fundamental right, not a privilege that can be revoked.

The recent developments at Telegram highlight how critical it is to shift toward decentralized identity systems. When platforms have no central control, they also become inherently more resistant to censorship and government overreach. In an era where governments are increasingly using the guise of regulation to invade privacy, decentralized platforms are not just a better alternative — they are becoming a necessity.

Striking a Balance: Decentralization with Responsibility

Of course, decentralized systems are not without their challenges. As we’ve seen with platforms like Silk Road and Tornado Cash, the anonymity offered by decentralization can sometimes provide a haven for illegal activities. This tension between freedom and responsibility was a central theme in my article on decentralized identity and reputation systems. While decentralized platforms offer privacy and autonomy, they also need systems of accountability.

One potential solution lies in decentralized reputation systems, where users build a reputation based on their actions within the network. This could help decentralized platforms self-regulate, ensuring that while privacy is protected, bad actors are held accountable. Such systems would allow users to engage with decentralized platforms anonymously while maintaining a level of trust and integrity within the community.

The Bigger Picture: What Telegram’s Shift Means for the Future of Privacy

The policy change at Telegram, combined with the increasing governmental pressure on platforms like it, underscores an uncomfortable truth: centralized platforms can no longer guarantee privacy. Whether it’s through government demands or corporate policy shifts, the privacy of users on centralized systems is always at risk.

This is why the shift toward decentralization and self-sovereign identity is so crucial. The power to control personal data and communications needs to be in the hands of the users, not corporations or governments. Telegram’s recent actions should serve as a wake-up call for anyone concerned about their digital privacy. As we move forward, decentralized platforms and identity systems are not just desirable — they are essential to preserving our freedoms in the digital age.

Conclusion: A Call to Decentralize

The arrest of Pavel Durov and Telegram’s subsequent policy shift have set the stage for a larger conversation about the future of privacy and free speech. In a world where centralized platforms are increasingly vulnerable to government overreach, it’s clear that decentralization is the path forward.

If we want to maintain control over our digital lives, we must embrace the technologies that enable it — decentralized identity, staking, and reputation systems. As governments and corporations continue to tighten their grip on the internet, decentralization may be the only way to keep our digital freedoms intact.

Interested in learning more about decentralized identities and how they can revolutionize transparency in venture capital? Explore Ontology’s decentralized identity solutions and see how we’re building the future of trust.

Telegram’s Policy Shift: The Need for Decentralization and Stronger Privacy Protections was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


HYPR

PCI DSS 4.0 Authentication Requirements: 5 Things to Know


The Payment Card Industry Security Standards Council recently updated their Data Security Standard (PCI DSS) for protecting payment card data. The latest version, PCI DSS 4.0, introduces more than 60 new or updated requirements, with new directives around passwords and multi-factor authentication (MFA) among the most consequential.

What is PCI DSS 4.0?

First introduced in 2004, the PCI DSS guidelines apply to any organization that stores, processes or transmits cardholder data. To demonstrate PCI DSS compliance, organizations undergo assessment on all systems that interact with the cardholder environment.

In March 2022, the Council announced PCI DSS version 4.0, providing guidelines that aim to better secure account holder and payment card data within today’s evolving cyberthreat landscape. Organizations are required to implement PCI DSS 4.0 guidelines in two phases. The first phase deadline was March 31, 2024 and included 13 new mandatory requirements. The next deadline is March 31, 2025, at which time another 51 new requirements, which were only recommendations in the first phase, become mandatory.

While version 4.0 contains updates across the board, some of the most significant relate to strong authentication requirements, specifically password usage and multi-factor authentication (MFA). Weak forms of authentication leave organizations and data vulnerable to brute force attacks, credential phishing and multiple other password-related attacks. Understanding these new requirements is key for PCI DSS compliance. We look at five critical areas as well as their potential impact on your business.

1. PCI DSS 4.0 Password Requirements

One of the most significant updates in PCI DSS version 4 involves stricter specifications regarding passwords. Key PCI DSS 4.0 password requirements (sections 8.3.4-8.3.9) include:

Length and Complexity: Passwords must be at least 12 characters long and use special characters, uppercase, and lowercase letters.
Reset and Re-Use: Passwords need to be reset every 90 days. An exception is made if continuous, risk-based authentication is used, where the security posture of accounts is dynamically analyzed, and real-time access is automatically determined accordingly.
Limited Login Attempts: According to PCI DSS 4.0 password requirements, after a maximum of 10 unsuccessful login attempts, users should be locked out for at least 30 minutes or until they verify their identity through the help desk or other means.

Potential impact of the PCI DSS 4.0 password requirements

Longer passwords are more onerous for users and are more likely to be written down or insecurely saved in files on a device. Forced updates also tend to trigger unsafe user behaviors as people often make only minor changes that hackers are likely to guess. Moreover, all these requirements are likely to result in higher help desk calls. Recent research from Forrester and HYPR shows that the average help desk call costs organizations $42.50/call.
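For illustration, here is a minimal sketch in Python of the length/complexity and lockout rules described above. The constants, names, and in-memory account object are our own simplifications for this post, not a reference implementation of the standard.

```python
import re
from datetime import datetime, timedelta

MIN_LENGTH = 12                    # PCI DSS 4.0: at least 12 characters
MAX_FAILED = 10                    # lock out after 10 unsuccessful attempts
LOCKOUT = timedelta(minutes=30)    # minimum lockout duration

def meets_policy(pw: str) -> bool:
    """Length and complexity checks: upper, lower, and special characters."""
    return all([
        len(pw) >= MIN_LENGTH,
        re.search(r"[A-Z]", pw),
        re.search(r"[a-z]", pw),
        re.search(r"[^A-Za-z0-9]", pw),
    ])

class Account:
    def __init__(self):
        self.failed = 0
        self.locked_until = None

    def register_failure(self, now: datetime):
        self.failed += 1
        if self.failed >= MAX_FAILED:
            self.locked_until = now + LOCKOUT   # 30-minute lockout kicks in

    def is_locked(self, now: datetime) -> bool:
        return self.locked_until is not None and now < self.locked_until

assert meets_policy("Tr0ub4dor&horse!")   # long enough, mixed character classes
assert not meets_policy("short1!")        # fails the 12-character floor
```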

2. MFA Required for All Access to the CDE

Under PCI DSS 3.2.1 guidelines, MFA was required only for administrators accessing the cardholder data environment (CDE). Under the new PCI DSS MFA rules (8.4.2), all access to the CDE must be gated by multi-factor authentication. The MFA requirements apply for all types of system components, including cloud, hosted systems, and on-premises applications, network security devices, workstations, servers and endpoints.

Multi-factor authentication is defined as using two independent factors from the categories:

Something you know, such as a password or passphrase.
Something you have, such as a token device or smart card.
Something you are, such as a biometric element.

In its guidance on authentication factors, Version 4.0 specifically says to look at FIDO (Fast IDentity Online) for the use of tokens, smart cards, or biometrics as authentication factors. While it stops short of requiring FIDO-based factors, some of its other guidance, as you will see below, points to a clear preference.

Potential impact

The new regulations make clear that multi-factor authentication must be used every time the CDE is accessed, even if a user already used MFA to authenticate into the network under the remote access requirements (see below). This will add significant friction for workers, with potential consequences for both productivity and employee satisfaction. Moreover, most organizations, even if they already use some form of MFA, do not have the correct technology or systems to address the requirement for MFA for desktops, workstations and servers.

3. PCI DSS Now Requires MFA for All Remote Access

Previously, MFA was required for remote access to the cardholder data environment. With this updated PCI DSS MFA guidance, anyone logging in from outside your secured network perimeter, even if they are not actually accessing the CDE, must use multi-factor authentication. This includes all employees, both users and administrators, and all third parties and vendors. This also means that any web-based access must use MFA, even if used by employees on site.

Potential impact

Effectively this means that all of your workforce that are remote, hybrid or have supporting roles outside the organization must use MFA at all times. It also means that any employee using a web-based application to access your networks and systems must use MFA, even if they are on site. In addition to the cost and IT burden of implementing MFA, cumbersome MFA procedures can negatively impact both employee productivity and satisfaction.   

4. PCI DSS MFA Configuration Requirements

The new standard doesn’t just cover who must use MFA and when, it also introduces guidelines on how MFA systems must be configured to prevent misuse. Many traditional MFA solutions are susceptible to man-in-the-middle, push bombing and other attacks that bypass MFA controls. Requirement 8.5 specifies weaknesses and misconfigurations to assess for PCI compliance. These include: 

Your MFA system must not be susceptible to replay (aka man-in-the-middle) attacks.
MFA must not be able to be bypassed unless a specific exception is documented and authorized by management.
Your MFA solution must use two different and independent factors for authentication.
Access cannot be granted until all authentication factors are successful.

As discussed earlier, the PCI DSS guidance on types of authentication factors makes reference to FIDO-based authentication. FIDO authentication is phishing-resistant, eliminates replay attacks and, depending on the FIDO solution, is inherently multi-factor.
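To see why this matters, here is a toy challenge-response sketch in Python showing how a fresh, single-use challenge signed by a device-held key defeats replay. It illustrates the general FIDO-style pattern only; it is not the actual FIDO2/WebAuthn protocol, and all names are our own.

```python
# pip install cryptography
import secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()    # private key never leaves the device
registered_pub = device_key.public_key()     # server stored this at enrollment

issued = set()                               # server side: challenges awaiting a response

def new_challenge() -> bytes:
    c = secrets.token_bytes(32)              # fresh random nonce per login attempt
    issued.add(c)
    return c

def check_response(challenge: bytes, signature: bytes) -> bool:
    if challenge not in issued:              # unknown or already-consumed nonce
        return False                         # a replayed response dies here
    issued.discard(challenge)                # strictly single use
    try:
        registered_pub.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

c = new_challenge()
sig = device_key.sign(c)                     # device proves possession of the key
assert check_response(c, sig)                # first use succeeds
assert not check_response(c, sig)            # replaying the same response fails
```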

Potential impact

If your MFA solution uses SMS, OTPs or other insecure methods, it may not meet PCI compliance requirements.

5. Strong Cryptographic Protocols

While earlier versions of PCI DSS required the use of strong cryptographic protocols to protect transactions and cardholder data, PCI DSS 4.0 extends the cryptographic requirement. With the new rules, any stored sensitive authentication data (SAD) must be encrypted using strong cryptography. 
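As a small illustration of what "strong cryptography" for stored data can look like in practice, here is a sketch using AES-GCM from Python's cryptography package. The payload and record label are made up, and real deployments would manage the key in an HSM or KMS rather than in process memory.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # in practice, an HSM/KMS-managed key
aesgcm = AESGCM(key)

sad = b"example sensitive authentication data"
nonce = os.urandom(12)                       # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, sad, b"record-42")

# Decryption fails loudly if the ciphertext or its associated data is altered.
assert aesgcm.decrypt(nonce, ciphertext, b"record-42") == sad
```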

Potential impact

If your authentication system doesn’t properly encrypt and securely store authentication data, then it may not meet PCI compliance requirements.

PCI DSS Section 8.3.3

It's worthwhile to call out another critical provision of PCI DSS, which though not new, is receiving renewed attention. Section 8.3.3 (previously section 8.2.2) mandates that the user identity is verified before modifying any authentication factor. This is intended to prevent social engineering attacks that target the credential reset / account recovery process. 

Related: How identity verification can stop help desk social engineering

Meet PCI DSS 4.0 Compliance With HYPR

The new PCI DSS framework now aligns much more closely with the NIST SP 800-63B Digital Identity Guidelines, guidance from CISA and the OMB, and other regulatory agencies that urge the adoption of FIDO-based phishing-resistant MFA and a Zero Trust authentication approach.

HYPR helps organizations comply with PCI DSS MFA requirements as well as multiple other provisions included in the standard. HYPR replaces the traditional password-based approach with secure passwordless authentication that is certified by FIDO and based on passkeys. Core elements of the solution, such as the incorporation of biometric authentication, possession of a trusted device, and cryptographic tokens securely stored on the device TPM or secure enclave, ensure strong, phishing-resistant multi-factor authentication that meets PCI DSS requirements. HYPR also provides secure self-service methods to verify identity for account recovery.

At the same time, HYPR greatly improves the user experience, eliminating the need for long, complex passwords and streamlining multi-factor authentication to a single user gesture. 

To learn how HYPR can help your organization meet PCI DSS 4.0 requirements, contact one of our compliance experts.

FAQs

1. What is PCI DSS 4.0, and why was it introduced?
PCI DSS 4.0 is an updated version of the Payment Card Industry Data Security Standard, announced in March 2022. It aims to enhance the security of cardholder data in response to the evolving cyberthreat landscape. It introduces new requirements, especially in areas such as strong authentication and multi-factor authentication (MFA), to better protect sensitive payment information.

2. What are the key changes in password requirements under PCI DSS 4.0?
Under PCI DSS 4.0, passwords must be at least 12 characters long and include a mix of special characters, uppercase, and lowercase letters. Passwords need to be reset every 90 days unless continuous, risk-based authentication is implemented. Additionally, accounts are locked after 10 unsuccessful login attempts, requiring identity verification for re-entry.

3. How does PCI DSS 4.0 impact multi-factor authentication (MFA) requirements?
PCI DSS 4.0 mandates MFA for all access to the cardholder data environment (CDE), not just administrators. This includes cloud, on-premises, and network components. MFA is also required for any remote access, even if employees are on-site but using web-based systems. The MFA system must be configured to resist attacks such as man-in-the-middle attacks and replay attacks.

4. What is the deadline for implementing PCI DSS 4.0 requirements?
The PCI DSS 4.0 guidelines are being rolled out in two phases. The first deadline, March 31, 2024, included 13 new mandatory requirements. The second phase, with an additional 51 requirements, must be fully implemented by March 31, 2025.

Editor's Note: This blog was originally published August 2023 and has been updated to reflect current timelines and provide additional information.

Tuesday, 24. September 2024

KuppingerCole

Navigating Data Challenges: Unlocking Power of Data Marketplaces


Modern enterprises face numerous data-related challenges, including siloed storage, security threats, and compliance requirements, making strategic and efficient data management essential. Navigating complex data landscapes requires ensuring data accessibility and security, while preventing unauthorized access and breaches. Robust data management strategies are key to maintaining competitive advantage and operational efficiency in today's fast-paced business environment. Data marketplaces – platforms that connect data producers of specific data products with data consumers who can leverage them for their own goals and projects – are an emerging technology that can power such strategies.

Join experts from KuppingerCole Analysts and Immuta as they discuss how data marketplaces address challenges in data management. They will explain how this approach can enhance data access control and internal sharing, provide a centralized platform for managing data assets, help break down silos, ensure compliance, streamline governance, improve security, and foster innovation, driving business success in a data-driven world.

Alexei Balaganski, Lead Analyst at KuppingerCole Analysts, will provide an overview of the risks and challenges in managing sensitive data at the enterprise level amidst the evolving compliance landscape. He will discuss how to balance security with accessibility and productivity, offering insights on reducing data friction while meeting regulatory requirements.

Bart Koek, Field CTO at Immuta, will discuss strategies for promoting efficient and compliant data sharing, present practical use cases, explore best practices from real-world implementations of data marketplaces at leading organizations, and provide an overview of Immuta’s Data Security Platform.




liminal (was OWI)

The Business Case for Customer Identity and Access Management in E-Commerce

The post The Business Case for Customer Identity and Access Management in E-Commerce appeared first on Liminal.co.

Microsoft Entra (Azure AD) Blog

Join us at the Microsoft Entra Suite Showcase!

This fall, we are bringing the Microsoft Entra Suite Showcase to cities worldwide. Join us to explore how our latest advancements in secure identity and access management can help safeguard your organization's digital assets.

Announced earlier this year, the Microsoft Entra Suite unifies identity and network access security—a novel and necessary approach for Zero Trust security. It provides everything you need to verify users, prevent overprivileged permissions, improve detections, and enforce granular access controls for all users and resources.

Register now to join us for a half-day event in the following locations:

September 23: Mexico City, Mexico (Registration Full)
September 25: São Paulo, Brazil (Registration Full)
September 30: Amsterdam, Netherlands (Register Here)
October 1: London, England (Register Here)
October 8: Dallas, TX, USA (Register Here)
October 8: Johannesburg, South Africa (Register Here)
October 9: Sydney, Australia (Register Here)
October 10: Atlanta, GA, USA (Register Here)
October 14: Berlin, Germany (Register Here)
October 16: Singapore (Register Here)
October 21: Silicon Valley, CA, USA (Register Here)
November 6: Dubai, UAE (Register Here)
November 12: Mumbai, India (Registration coming soon)
November 13: Paris, France (Register Here)
November 14: Bangalore, India (Registration coming soon)
December 4: New York, NY, USA (Register Here)
December 10: Chicago, IL, USA (Register Here)

To learn more about Microsoft Entra Suite:

Read the announcement on the Microsoft Security blog
Watch the Zero Trust Spotlight on demand

We look forward to seeing you there!


auth0

Authtoberfest 2024 is Here!

Join us this October as we celebrate Hacktoberfest 2024 by encouraging developers to contribute to the open-source community.

Infocert

Download page: Infocert IDC Vendor Profile

Download page: Infocert IDC Vendor Profile

Thank you for filling out your information!

Click or tap on the image to view and download the presentation:

The post Download page: Infocert IDC Vendor Profile appeared first on infocert.digital.


Ocean Protocol

Ocean Nodes & Oasis Sapphire Integration — A New Era of Decentralized Encryption

Today we are excited to announce the upcoming integration of Ocean Nodes with Oasis Sapphire, to enhance encryption, security, and privacy across the network, while also introducing a revamped incentives program that will better reward node operators for their contributions. By integrating the Oasis SDK, we are taking another step towards achieving our goal of democratized computing, empowering everyone to create and use AI without sacrificing privacy or control.

This post will cover the information on the Oasis Sapphire integration, the improvements to our encryption model, and the exciting overhaul of our incentives program.

In the past, encryption on the Ocean Network relied on a private key stored by the chosen Ocean Provider. This system had two major drawbacks:

Trust Issues: Users had to trust the provider to keep the encryption key secure.
Provider Dependency: If the provider went offline, the user had to republish the asset with a new provider.

The integration of Oasis Sapphire eliminates these concerns by decentralizing encryption, ensuring that no single node holds undue control. With Sapphire, encryption is managed across the network, providing a more secure and resilient system.

Additionally, we’ve introduced NFT-based trusted node lists, allowing anyone to create a trust list. Only nodes on that list can decrypt and serve assets, ensuring enhanced security and reducing the need to trust individual providers.
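
For readers who want to picture how such a trust-list check might work, here is a minimal TypeScript sketch against an ERC-721-style contract. The contract shape, the addresses, and the decision to gate decryption on NFT ownership are illustrative assumptions on our part, not Ocean's published implementation.

// Illustrative sketch only: assumes the trust list is backed by an
// ERC-721-style contract and that holding one of its NFTs marks a node
// as trusted. Not Ocean's actual contract interface.
import { ethers } from "ethers";

const TRUST_LIST_ABI = [
  "function balanceOf(address owner) view returns (uint256)",
];

async function canNodeDecrypt(
  provider: ethers.Provider,
  trustListAddress: string, // hypothetical trust-list NFT contract
  nodeAddress: string       // the node asking to decrypt and serve an asset
): Promise<boolean> {
  const trustList = new ethers.Contract(trustListAddress, TRUST_LIST_ABI, provider);
  // Nodes holding at least one trust-list NFT may decrypt; others are refused.
  const balance: bigint = await trustList.balanceOf(nodeAddress);
  return balance > 0n;
}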

Incentives Overhauled: Moving to Oasis Sapphire

Since the launch of Ocean Nodes in August, we’ve received a lot of positive feedback from you, and the number of nodes has grown beyond our expectations – currently sitting at 24,598 nodes (at the time of publication).

During Epoch 2 (Week 37), we realized that 81% of nodes received under 1 FET in rewards, while more than $1,000 was spent on gas fees – money we believe should be put to better use, that is, redirected to you. Keep reading.

To resolve this, we did some thinking and calculating and overhauled the incentives program, bringing with it a host of improvements:

1. Move the incentive program to Oasis Sapphire: this will reduce gas fees and leverage the existing tech stack.
2. Incentives in ROSE: Future incentives will be distributed in ROSE, the native token of the Oasis Network.
3. Uptime Criteria: To encourage stable and reliable nodes, only nodes with at least 90% uptime will be eligible for rewards each week (lowered from the industry standard of 95%). The total incentive pool will be split evenly between all eligible nodes.
4. 250,000 ROSE per Epoch: We will distribute a total of 250,000 ROSE per epoch, giving significant incentive to maintain high availability. That's more than 2x the previous rewards.

We’ll let you focus on point 4 of the above for a while.
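
To make point 4 concrete, here is a minimal TypeScript sketch of the stated rule: the 250,000 ROSE epoch pool split evenly across nodes with at least 90% uptime. The data shape is our illustration; only the numbers come from this post.

interface NodeStats {
  id: string;
  uptimePct: number; // measured uptime for the epoch, 0-100
}

// Even split of the epoch pool among all eligible nodes.
function epochRewards(nodes: NodeStats[], pool = 250_000): Map<string, number> {
  const eligible = nodes.filter((n) => n.uptimePct >= 90);
  if (eligible.length === 0) return new Map();
  const perNode = pool / eligible.length;
  return new Map(eligible.map((n) => [n.id, perNode]));
}

// Example: if all 24,598 nodes met the uptime bar, each would receive
// roughly 250,000 / 24,598 ≈ 10.16 ROSE for the epoch.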

Now, of course, this requires a bit of time and work on our end, and patience on yours. To achieve all of the above and make the transition to Oasis Sapphire as smooth as possible, we are temporarily pausing the distribution of incentives. Monitoring will continue, and your rewards will accumulate during this period. Once the move to Sapphire is complete, the accumulated rewards will be converted into ROSE and distributed to eligible nodes.

For Epochs 3 and 4, the allocation and eligibility criteria will remain the same, but incentives will not be distributed at the end of the epochs. As noted, after we transition to Oasis Sapphire, we will distribute the previously accumulated rewards in ROSE, ensuring no one misses out on their well-earned rewards.

Summary: What Node Operators Need to Know

Higher rewards for reliable nodes: Nodes that maintain 90% uptime or higher will qualify for rewards, with a significant reward pool of 250,000 ROSE per epoch.
Rewards will accumulate: Even though incentives will not be distributed until the integration is complete, your rewards will still be tracked and accumulated.
ROSE rewards: Once incentives resume, rewards will be distributed in ROSE, benefiting from lower gas fees and efficient distribution.

The transition to Oasis Sapphire represents a major leap forward in our commitment to decentralization, privacy, and security. With better incentives (250K ROSE), more efficient rewards, and a fully decentralized encryption system, Ocean Nodes are now positioned to support the future of AI and data sharing on a global scale.

We are excited to see how this new phase empowers our community, and we remain dedicated to providing a network that rewards active participation and maintains the highest standards of decentralization and privacy.

Set up your node today and be part of the next chapter in decentralized AI!

Ocean Nodes & Oasis Sapphire Integration — A New Era of Decentralized Encryption was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Tokeny Solutions

AMA-AMBIOGEO Tokenizes $4.6 Billion Gold Reserves with Tokeny

The post AMA-AMBIOGEO Tokenizes $4.6 Billion Gold Reserves with Tokeny appeared first on Tokeny.

LUXEMBOURG, 24th September 2024 – AMA-AMBIOGEO, a pioneering mining Joint Venture (JV) with a focus on sustainable resource management, announces the tokenization of $4.6 billion in gold reserves, using Tokeny’s technology to transform previously inaccessible real-world assets (RWA) into tradable and compliant digital securities.

The JV between AMA Resources and AMBIOGEO brings together 70 years of mining experience and fuses the talent of people distributed worldwide to transform the financial capability of miners in South America, becoming one of the largest and most important exploration companies in that region and standing out as a unique player in the natural resources sector.

The Supernova Project is the inaugural venture in which AMA-AMBIOGEO has tokenized its gold reserves. This project combines two reserves located in northern Brazil: Supernova and Riacho Seco, holding a total of 474 metric tons of gold certified under the S-K 1300 standard, with an economic value of $36.8 billion at the time of tokenization. The discounted cash flow (DCF), or present value, assigned to the asset was, however, 12.5% of that figure, or $4.6 billion, accounting for extraction costs, the current state of the mines, and the time value of money. The asset has been transferred to a Wyoming LLC whose equity securities have been tokenized and are being promoted through a Private Placement offering under SEC Regulation D, Rule 506(c), and Regulation S for non-US investors.

AMA-AMBIOGEO’s sustainability model leverages tokenization to unlock the financial value of gold reserves while allowing most of the minerals to remain in the ground. By converting these reserves into digital securities, investors can own a stake without the need for physical extraction, preserving the environment and providing liquidity through fractional ownership.

Unlike owning physical gold or land, co-ownership of tokenized proven gold reserves offers a sustainable way to unlock value without extraction. Our goal is to modernize the mining industry through innovation and sustainability. Partnering with Tokeny, we’re bringing $4.6 billion in gold reserves onchain, offering a digital experience with features like self-custody, transferability, and collateralization, capabilities that were never before available to investors.

– Ernesto Bernadet, CEO of AMA Resources

Tokeny’s role as the technology provider for this innovative project ensures secure, compliant, and efficient tokenization, using the ERC-3643 standard to ensure compliance and interoperability. It lays the foundation for broad distribution in multiple marketplaces and future integration with DeFi platforms to enable innovative features yet to come.
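
For the technically curious, the sketch below shows roughly how eligibility can be queried onchain before a transfer of an ERC-3643 token, assuming the standard T-REX interfaces (identityRegistry() on the token, isVerified() on the registry). The token address is a placeholder, and this is a sketch, not Tokeny's own tooling.

import { ethers } from "ethers";

// Minimal slices of the ERC-3643 (T-REX) interfaces used below.
const TOKEN_ABI = ["function identityRegistry() view returns (address)"];
const REGISTRY_ABI = ["function isVerified(address _userAddress) view returns (bool)"];

// True only if the registry holds a verified onchain identity with the
// claims required to hold this permissioned token.
async function isQualifiedInvestor(
  provider: ethers.Provider,
  tokenAddress: string, // placeholder: the tokenized equity contract
  investor: string
): Promise<boolean> {
  const token = new ethers.Contract(tokenAddress, TOKEN_ABI, provider);
  const registryAddress: string = await token.identityRegistry();
  const registry = new ethers.Contract(registryAddress, REGISTRY_ABI, provider);
  return registry.isVerified(investor);
}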

Compliance is the backbone of RWA tokenization, and AMA-AMBIOGEO’s dedication to enforcing it onchain sets them apart. We’re proud to support their efforts with our onchain operating system, enabling them to issue, manage, and distribute permissioned ERC-3643 tokens that only qualified investors can access and trade, all while keeping the door open for DeFi innovation.

– Luc Falempin, CEO of Tokeny

About AMA Resources and AMBIOGEO

AMA Resources is a dynamic corporation headquartered in Florida (US). It holds ownership of nineteen properties, totaling approximately 22,101 hectares (equivalent to 54,613 acres). In addition, it holds several gold and copper concessions in Argentina. These strategically located properties boast proximity to essential infrastructure, including water, electricity, and transportation access via land, rail, or sea. Its core focus lies in exploration financing, leveraging the innovative concept of tokenization.

AMBIOGEO is a mining company committed to sustainability and social responsibility, located in Parnamirim, Rio Grande do Norte, but active in all regions of Brazil. The company operates mainly in various professional, scientific and technical activities, with a significant focus on environmental and geological consulting.

By harnessing blockchain technology, they aim to transform proven mineral reserves into tradable digital assets. Their mission: to unlock value, enhance liquidity, and empower investors in the natural resources landscape.

About Tokeny

Tokeny is a leading onchain finance operating system. Tokeny has pioneered compliant tokenization with the open-source ERC-3643 standard and advanced white-label software solutions. The enterprise-grade platform and APIs unify fragmented onchain and offchain workflows, integrating essential services to eliminate silos. It enables seamless issuance, transfer, and management of tokenized securities. By automating operations, offering innovative onchain services, and connecting with any desired distributors, Tokeny helps financial actors attract more clients and improve liquidity. Trusted globally, Tokeny has successfully executed over 120 use cases across five continents and facilitated 3 billion onchain transactions and operations.

Website | LinkedIn | X/Twitter

The post AMA-AMBIOGEO Tokenizes $4.6 Billion Gold Reserves with Tokeny appeared first on Tokeny.


auth0

Level Up: Auth0 Plans Just Got an Upgrade

We’ve leveled up our Free, Essential, and Professional plans.

KuppingerCole

Understanding the Opposition


by Anne Bailey

What to do now to prepare for the future

Earlier this year, KuppingerCole published Strategic Cybersecurity Recommendations for 2024-2033. Analysts at KuppingerCole conducted scenario-based research on the most critical trends, risks, and opportunities of the next ten years, which yielded the recommendations we present in the paper.

One of the recommendations we make is to know the opposition.

Know the Opposition

The paper identified a range of threats that must be recognized before effective mitigation action can be taken. Taken at a geopolitical level, different countries and regions will have different patterns of development over the next ten years, some taking more protectionist stances and others open and collaborative, with of course many varieties in between. These different environments foster different types of economic development... and crime.

Businesses operating in each environment must strive to understand the malicious actors that thrive in that environment, as well as their motivations. Are the conditions right for lone wolf attacks, state-sponsored attacks, or even corporate-on-corporate attacks? Are they seeking financial gain, disruption, or influence? The answers to these questions should help shape a unique defense strategy.

How to Know

Chief Information Security Officers (CISOs) must know the opposition and should seek to do so by understanding the environment and context that cause malicious actors to attack. There are of course many ways to do this. We recommend having incident response plan(s) that address the evolving threats and threat actors, and scenario planning the threats that are particular to your region, industry, and business.

One place to do that is at cyberevolution in Frankfurt, Germany in December this year. There is a track on understanding the opposition, covering quantum threats, threat intelligence, business models behind common attacks, and much more. Take proactive steps to understand the threats by joining cyberevolution.

Tuesday, 24. September 2024

SC Media - Identity and Access

Authentication and Authorization in the AI Era - Shiven Ramji - BSW #365


Microsoft Entra (Azure AD) Blog

Move to cloud authentication with the AD FS migration tool!


We’re excited to announce that the migration tool for Active Directory Federation Service (AD FS) customers to move their apps to Microsoft Entra ID is now generally available! Existing customers can begin updating their identity management with more extensive monitoring and security infrastructure by quickly identifying which applications are capable of being migrated and assessing all their AD FS applications for compatibility. If you don't have an Entra ID account, you can still access the Migrate AD FS to Microsoft Entra ID guide to see what a migration would look like for your organization.

 

In November we announced AD FS Application Migration would be moving to public preview, and the response from our partners and customers has been overwhelmingly positive. For some, transitioning to cloud-based security is a daunting task, but the tool has proven to dramatically streamline the process of moving to Microsoft Entra ID. 

 

A simplified workflow, reduced need for manual intervention, and minimized downtime (for applications and end users) have reduced stress for hassle-free migrations. The tool not only checks the compatibility of your applications with Entra ID, but it can also suggest how to resolve any issues. It then monitors the migration progress and reflects the latest changes in your applications. Watch the demo to see the tool in action.

Moving from AD FS to a more agile, responsive, cloud-native solution helps overcome some of the inherent limitations of the old way of managing identities.

 

In addition to more robust security, organizations count greater visibility and control with a centralized, intuitive admin center and reduced server costs as transformative benefits of moving to modern identity management. Moreover, Entra ID features can help organizations achieve better security and compliance with multifactor authentication (MFA) and conditional access policies—both of which provide a critical foundation for a Zero Trust strategy.

 

More Entra ID features include:

Passwordless and MFA for better user experience.
A rich set of apps, APIs, SDKs, and connectors for customization and extensibility.
Granular adaptive access controls to define and monitor conditional access.
Self-service portals that allow employees to securely manage their own identity.

 

Want to learn more about Microsoft Entra? Get the datasheet and take a tour here. Ready to get started? Visit Microsoft Learn and explore our detailed AD FS Application Migration guide. 

 

Have any questions or feedback? Let us know here.  

 

Melanie Maynes

Director of Product Marketing

For a comprehensive overview of the migration tool and its capabilities, check out these other resources:

Overview of AD FS application migration - Microsoft Entra ID | Microsoft Learn
Use the AD FS application migration to move AD FS apps to Microsoft Entra ID - Microsoft Entra ID | Microsoft Learn
Demo: Effortless Application Migration Using Microsoft Entra ID | OD03 (youtube.com)
Best practices to migrate applications and authentication to Microsoft Entra ID - Microsoft Entra | Microsoft Learn
Customer Case Study: Microsoft Customer Story-Universidad de Las Palmas de Gran Canaria boosts accessibility with Microsoft Entra ID

 

Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

Explore the key benefits of Microsoft Entra Private Access


The traditional network security models are becoming increasingly ineffective in a world where remote work and cloud services are the norm. Conventional technologies like VPNs, while popular, offer limited protection in a boundary-less landscape, typically granting users excessive network access and posing significant risks. If compromised, these can lead to unauthorized access and potentially lateral movement within corporate networks, exposing sensitive data and resources. Microsoft Entra Private Access is at the forefront of addressing these challenges by effectively integrating identity and network access controls.

 

Microsoft Entra Private Access

 

In July we announced the general availability of Microsoft Entra Suite, which brings together identity and network access controls to secure access to any cloud or on-premises application or resource from any location. We also announced the general availability of Microsoft’s Security Service Edge (SSE) solution. Microsoft Entra Private Access, a core component of Microsoft’s SSE solution, allows you to replace your VPN with an identity-centric Zero Trust Network Access (ZTNA) solution to securely connect users to any private resource and application without exposing full network access to all resources. It’s built on Zero Trust principles to protect against cyber threats and mitigate lateral movement. Through Microsoft’s global private network, it gives your users a fast, seamless, edge-accelerated access experience that balances security with productivity.

 

Figure 1: Secure access to all private applications, for users anywhere, with an identity centric ZTNA

 

Modernize access to private applications

 

Despite the cloud’s growing dominance, you may still rely on on-premises infrastructure and use legacy VPNs to enable your remote workforce. Legacy VPNs typically grant excessive access to the entire network by making the remote user’s device part of your network.

 

Figure 2: Legacy VPNs typically grant excessive access to the entire network

 

Microsoft Entra Private Access helps you easily start retiring your legacy VPN and level up to an identity-centric ZTNA solution that reduces your attack surface, mitigates lateral threat movement, and removes unnecessary operational complexity for your IT teams. Unlike traditional VPNs, Microsoft Entra Private Access protects access to your network for all your users — whether they are remote or local, and accessing any legacy, custom, modern, or private apps that are on-premises or on any cloud.

 

Figure 3: Replace legacy VPN with an identity centric ZTNA solution

 

For example, Microsoft Entra Private Access enhances security for Remote Desktop Protocol (RDP) sessions by enabling access without direct network connectivity. It leverages Conditional Access policies, including multifactor authentication (MFA), to validate both device and user identities. This ensures that only authenticated users with compliant devices can establish an RDP session on your network, providing a secure and seamless remote access experience. By integrating with Microsoft Entra ID, Microsoft Entra Private Access validates access tokens and connects users to the appropriate private server, reinforcing the security posture without the need for traditional VPN solutions.

Accelerate your journey to Zero Trust with Microsoft Entra Private Access

 

Microsoft Entra Private Access helps you accelerate your journey to ZTNA by offering a streamlined approach to enforcing least privilege access to on-premises or private applications, extending Zero Trust principles to any private app or resource, regardless of location — on-premises or in any cloud.

 

Figure 5: Accelerate your ZTNA journey with Microsoft Entra Private Access

 

Here, in more detail, are the key capabilities that help you move from legacy VPNs to ZTNA:

 

Quick Access policy simplifies the transition from legacy VPNs, making it easy to onboard with Microsoft Entra Private Access. It allows you to create network segments that can include multiple apps and resources.

 

Figure 6: Fast and easy migration from legacy VPNs with Quick Access policy

 

Over time, Private Application Discovery enables you to discover all your private apps, onboard them for segmented access, and simplify the creation of Conditional Access policies for groups of apps based on business impact levels.

 

Figure 7: Automatic private application discovery and onboarding

 

Enforce Conditional Access across all private resources

 

To enhance your security posture and minimize the attack surface, it’s crucial to implement robust Conditional Access controls, such as MFA (biometric and/or phishing-resistant), across all private resources and applications, including legacy or proprietary applications that may not support modern identity.

 

The familiar Conditional Access policies used today can now be extended to all private apps, including legacy apps and non-web resources, such as RDP, SSH, SMB, SAP, or any other TCP- or UDP-based private application, resource, or network endpoint.

 

Figure 8: Enforce Conditional Access across all private resources

 

Conditional Access is applied to every network flow, ensuring comprehensive security coverage across all your private apps and resources—including MFA, location-based security, advanced segmentation, and adaptive least-privilege access policies—without making any changes to your apps or resources.
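
As a rough illustration of what extending Conditional Access to private apps can look like in automation, the TypeScript sketch below creates an MFA-requiring policy through the Microsoft Graph conditional access API. The access token and application IDs are placeholders, and the payload shows only a minimal subset of the policy schema; consult the Graph documentation before using it.

// Sketch: create a Conditional Access policy requiring MFA for a set of
// private apps via Microsoft Graph. Assumes a token with the
// Policy.ReadWrite.ConditionalAccess permission; app IDs are placeholders.
async function requireMfaForPrivateApps(accessToken: string, privateAppIds: string[]) {
  const policy = {
    displayName: "Require MFA for private apps",
    state: "enabled",
    conditions: {
      users: { includeUsers: ["All"] },
      applications: { includeApplications: privateAppIds },
      clientAppTypes: ["all"],
    },
    grantControls: { operator: "OR", builtInControls: ["mfa"] },
  };

  const res = await fetch(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(policy),
    }
  );
  if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
  return res.json();
}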

Deliver seamless access to private apps and resources with single sign-on

 

Single sign-on (SSO) simplifies the user experience by eliminating the need to sign in to each private application individually. By enabling SSO, users gain seamless access to all necessary private applications, whether located on-premises or across various clouds, without the need for repeated authentication or modifications to existing apps.

 

Microsoft Entra Private Access further streamlines this process by providing SSO for on-premises resources, utilizing Kerberos for secure, ticket-based authentication. For an even more integrated experience, you can opt to implement Windows Hello for Business with cloud Kerberos trust, offering a modern, passwordless sign-on option for users. This cohesive approach to SSO, supported by Microsoft Entra Private Access, ensures a secure and efficient access management system for private resources across the enterprise landscape.

 

Deploy across various platforms, ports, and protocols

 

Enable secure connectivity to private resources from Windows and Android, with support for iOS and macOS coming later this year, and Linux support to follow. The service spans all operating systems and accommodates any port and protocol, including SMB, RDP, FTP, SSH, SAP, printing, and all other TCP/UDP-based protocols. For security teams already using an Application Proxy, you can seamlessly and confidently transition to Microsoft Entra Private Access knowing that all existing use cases and access to existing private web applications will keep working with no disruption.

Securing just-in-time access to sensitive resources

 

Microsoft Entra Private Access, tightly integrated with Privileged Identity Management (PIM), a service within Microsoft Entra ID Governance, helps you secure just-in-time access to private resources for privileged users. This integration ensures that privileged access is granted only when necessary, aligning with the Zero Trust principle of least privilege access. It allows for the enforcement of robust Conditional Access controls such as MFA, to ensure that only eligible and validated users can access sensitive resources. This approach not only enhances security but also supports compliance and auditing requirements by providing detailed tracking and logging of privileged access requests.

 

Secure access to Azure managed services with Microsoft Entra Private Access

 

Azure offers many managed services, such as Azure SQL, Azure Storage, and Azure ML, among others. Microsoft Entra Private Access ensures a secure, private connection to Azure services while enforcing security policies and posture during access, allowing you to enforce Conditional Access controls such as MFA and IP-based access controls. With comprehensive enforcement of identity and network access controls, Microsoft Entra Private Access ensures that managed services are accessed securely. Here are two key scenarios:

 

Secure Azure managed services access: Typically, Azure services are accessed over the internet. However, for security reasons, it’s preferable to keep the traffic between users or applications and Azure services private, avoiding exposure to the internet. This can be achieved through Microsoft Entra Private Access, where services like Azure Storage can be connected to a virtual network (vNet) using Private Link. This ensures that all traffic remains private, while additional identity and network access controls are enforced.

Figure 11: Enable secure access to Azure Storage with Private Access through Private Link

 

Service endpoint for controlled access: In contrast to Private Link, the service endpoint method does not integrate services into a vNet. Instead, it restricts incoming traffic to connections from specified connector IP addresses through Microsoft Entra Private Access. This approach helps secure access to Azure services by permitting access solely through an approved path, where additional security measures like MFA and device posture can be enforced.

Figure 12: Ensures a single, secure path to the Azure managed services through Microsoft Entra Private Access

 

Simplify Microsoft Entra private network connector deployment for your private workloads

 

In addition to the Microsoft Entra admin center, the private network connector is now available on Azure Marketplace and AWS Marketplace in preview. This allows users to easily deploy a virtual machine with a pre-installed Private Access Connector through a streamlined managed model for Azure and AWS workloads. The Marketplace offerings automate the installation and registration process, simplifying authentication setup and enhancing the user experience.

 

Figure 13: Microsoft Entra private network connector on Microsoft Azure Marketplace

 

Figure 14: Microsoft Entra private network connector on AWS Marketplace

 

The Microsoft Entra private network connector is a required software component to enable Microsoft Entra Private Access. It sits alongside customers’ private applications in the customer’s network and is designed to provide secure and convenient access to them from any device and location. It acts as a bridge between Microsoft’s SSE edge and application servers, facilitating the authentication, authorization, and encryption of traffic.

 

Enable edge accelerated Zero Trust private domain name resolution

 

Microsoft Entra Private Access enhances your organization’s domain name resolution (DNS) capabilities and simplifies the process of accessing IP-based app segments and private resources using FQDNs, allowing your users to access private resources with single-label names or hostnames without complex configurations. With accelerated DNS at Microsoft’s SSE edge, DNS responses are cached, leading to significantly faster resolution times and enhanced performance. Moreover, the integration of DNS with Conditional Access adds an extra layer of identity-centric security controls, allowing for more granular control over access to private resources.

 

For instance, with Private DNS support, you can provide your domain suffixes to simplify Zero Trust access to private apps using FQDNs, streamlining the connection process to internal resources, while using your existing DNS deployments. This is particularly beneficial in scenarios where your users need to seamlessly access private resources without the need for VPNs or domain-joined devices, while offering a more secure and efficient way to manage access.

 

Simplify access and improve end user experience at a global scale

 

Enhance user productivity by leveraging Microsoft’s vast global edge presence, providing fast and easy access to private apps and resources—located on-premises, on private data centers, and across any cloud. Users benefit from optimized traffic routing through the closest worldwide Point of Presence (PoP), reducing latency for a consistently swift hybrid work experience.

 

Deploy side-by-side with third-party network access solutions

 

A distinctive feature of Microsoft’s SSE solution is its built-in compatibility with third-party network access solutions, which allows you to send only the traffic you need to Microsoft’s SSE edges. Leverage Microsoft and third-party network access solutions in a unified environment to harness a robust set of capabilities from both and accelerate your Zero Trust journey. The flexible deployment options offered by Microsoft’s SSE solution empower you with enhanced security and seamless connectivity for an optimal user experience.

 

Conclusion

 

Simplifying and securing access for your hybrid workforce is crucial in a landscape where traditional boundaries have dissolved. Enforcing least-privilege access and minimizing reliance on legacy tools like VPNs are essential steps in reducing risk and mitigating sophisticated cyberattacks.

 

Microsoft Entra Private Access helps you secure access to all your private apps and resources for users anywhere with an identity-centric ZTNA solution. It allows you to replace your legacy VPN with ZTNA to securely connect users to any private resource and application without exposing full network access to all resources.

 

The unified approach across identity and network access within Microsoft’s SSE solution signifies a new era of network security. This approach ensures that only authorized users are authenticated, and their devices are compliant before accessing private resources.

 

Learn More

 

To get started, begin a trial to explore Microsoft Entra Private Access general availability. You can also sign up for an Entra Suite trial, which includes Microsoft Entra Private Access. For further help, contact a Microsoft sales representative, and share your feedback to help us make this solution even better.

 

Ashish Jain, Principal Group Product Manager

Abdi Saeedabadi, Senior Product Marketing Manager

 

Read more on this topic

Microsoft Entra Private Access
Microsoft Security Service Edge now generally available
Simplify your Zero Trust strategy with the Microsoft Entra Suite and unified security operations platform, now generally available
Watch Zero Trust spotlight webcast
Watch Microsoft Entra Private Access tech accelerator webinar
Get started and try Microsoft Entra Private Access
Get started and try Microsoft Entra Internet Access
Get started and try Entra suite products

 

Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

Microsoft Entra Internet Access
Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

Spruce Systems

How Personal Data Licenses Can Keep Digital Identity Private

How digital identity can give you total control of your sensitive data.

The world is in the early stages of supplementing old-school paper identity documents with digitally secured identification, licensing, and other credentials. This major technological and infrastructure shift offers big benefits in privacy, security, and convenience for everyday people.

Digital identity has the potential to vastly improve your control over your personal data. Already, many verifiable digital credential (VDC) formats support a feature known as “selective disclosure,” which lets users choose exactly what data fields they hand over during a verification. 
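
Conceptually, selective disclosure means deriving a presentation that carries only the fields the holder consents to share. Production formats such as SD-JWT or BBS+ signatures keep the reduced set cryptographically verifiable; the TypeScript sketch below illustrates only the data-minimization idea, not the cryptography.

type CredentialClaims = Record<string, unknown>;

// Illustrative only: real selective disclosure keeps the remaining fields
// verifiable against the issuer's signature instead of just dropping the rest.
function discloseOnly(claims: CredentialClaims, fields: string[]): CredentialClaims {
  return Object.fromEntries(
    Object.entries(claims).filter(([key]) => fields.includes(key))
  );
}

// A verifier that needs proof of birth date never sees the address:
const license = { name: "A. Jones", dateOfBirth: "1990-04-02", address: "12 High St" };
const presented = discloseOnly(license, ["dateOfBirth"]);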

We can go even further to give users broader, long-term control over how their information is shared, including the ability to closely monitor who has permission to use it – and even to exercise their right to have their data deleted with the tap of a button.

For example, millions of Americans today spend countless hours on phone calls and dust off fax machines to send information between primary care physicians and healthcare specialists. We could vastly improve the efficiency of electronic health record systems and the patient experience by describing handling rules for a patient’s protected health information (PHI) in a human- and machine-readable format called a “personal data license,” which is digitally signed by the patient.

A blood test result, for instance, is shared with a patient’s primary care physician (PCP) along with a new personal data license, which specifies that the test results may be stored for up to 5 years across all entities and shared with their cardiologist without the patient needing to fill out any additional forms.

After 5 years, or when the patient decides to revoke the personal data license with a tap in their app, the data would need to be deleted under the HIPAA privacy framework. The patient could also update the personal data license to allow for other counterparties to also receive the data, or extend the sharing duration. Depending on the reporting requirements described in the license, the patient could also track when, where, and to whom their PHI was shared further.
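
One way to picture a personal data license is as a small signed policy object that travels with the data. The TypeScript sketch below models the blood-test example; the field names and the check are a hypothetical illustration, not a published schema.

interface PersonalDataLicense {
  subjectId: string;           // the patient granting the license
  dataRef: string;             // reference to the PHI, e.g. the blood test result
  allowedRecipients: string[]; // parties permitted to receive the data
  retentionYears: number;      // how long any holder may store it
  revoked: boolean;            // flipped when the patient taps "revoke"
  patientSignature: string;    // patient's signature over the terms above
}

// Hypothetical check a holder's system could run before sharing onward.
function maySharePHI(
  license: PersonalDataLicense,
  recipient: string,
  issuedAt: Date,
  now: Date
): boolean {
  const msPerYear = 365.25 * 24 * 60 * 60 * 1000;
  const withinRetention =
    now.getTime() - issuedAt.getTime() < license.retentionYears * msPerYear;
  return !license.revoked && withinRetention && license.allowedRecipients.includes(recipient);
}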

We call this kind of system “Personal Data Licensing,” and it can work not only with health records but also with digital identity, professional credentials, and anything of value that is paper or plastic today but will be digital tomorrow. Making it a reality will involve technology working hand in hand with privacy-focused public policy.

If you haven't already, subscribe to our blog to stay tuned for part 2 on this topic, where we will describe in detail how it works in practice.


About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


SC Media - Identity and Access

MC2 Data leak exposes nearly a third of US population

The misconfiguration revealed more than 106 million records with U.S. citizens' private information and over 2.3 million MC2 Data subscribers' data.



Metadium

Partnership with WEB2X — Web3 Development Services


Dear Community,

We are excited to announce our partnership with WEB2X, a Web3 development service provider. WEB2X enables companies to easily build Web3 services without the need for developers or complex infrastructure — just by connecting to its API. This collaboration will significantly streamline the transition to Web3 for companies preparing to enter the space.

Metadium, with its superior DID technology, will be included in WEB2X’s infrastructure as part of this partnership. Additionally, WEB2X will provide partial gas fee support to companies that launch their services through the WEB2X platform.

Here are some of the key features of WEB2X:

📍 AUTH: Create and link blockchain accounts with just a passkey while maintaining existing service procedures.

📍 Transaction: Generate and execute blockchain transactions using APIs without additional training in blockchain development languages.

📍 Functions: Easily access data generated on the blockchain and integrate it with existing services through automation.

📍 Oracle: Connect it with blockchain to ensure the reliability of external data such as exchange rates, stock prices, and identity information.

📍 VRF: Provide tamper-proof random data through a verifiable random function (VRF), with all history recorded and verified on the blockchain and accessible via API.

📍 CCMP: Enable cross-chain message exchange, allowing compatibility between different blockchains.

WEB2X officially launched on 20th August with eight product types: tickets, digital photocards, vouchers, season passes, memberships, coupons, commemorative badges, and certificates. The platform is continuously being updated with more products.

You can now experience Web3 development with WEB2X by using its “Try It” feature, which offers a hands-on trial in under 30 seconds with just a few clicks. For more information, please check the links below:

🔗 Try WEB2X: https://web2x.io/event

🔗 WEB2X Official Site: https://web2x.io/

We at Metadium hope this partnership with WEB2X will be a stepping stone for more builders to join the Metadium ecosystem effortlessly. We look forward to your interest and participation.

Metadium Team

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

Partnership with WEB2X — Web3 Development Services was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 22. September 2024

KuppingerCole

Flexibility and Adaptability are Key: Identity Fabric 2025


In this episode, Matthias Reinwarth discusses the updates to the Identity Fabric and IAM reference architecture with Dr. Philipp Messerschmidt and Martin Kuppinger. The Identity Fabric is a holistic concept that provides seamless yet secure access to every type of identity for every type of service. The update to the Identity Fabric is necessary to reflect the developments in the IAM world, such as new trends in authorization and authentication.

The IAM reference architecture provides more detail and functional capabilities for each pillar of IAM. The update also includes the addition of new identity types and the inclusion of architectural concepts like microservice architectures and identity API layers. The Identity Fabric 2025 will be flexible and adaptable to future trends and challenges in IAM.



Friday, 20. September 2024

Spherical Cow Consulting

The Wallets Are Coming – But Are We Ready for What’s Next?


As people like John Bradley and Shannon Roddy have noted in their conference talks earlier this year, the wonderful world of wallets is about to experience some of the growing pains that “traditional” identity federations have been dealing with for decades. When I say traditional, I’m talking about the SAML-based bilateral and multilateral federations that have dominated the Research and Education (R&E) space for years. These federations have served as the backbone for secure access in academic and research settings, but here’s the kicker – the lessons learned from that world aren’t making their way into the commercial or enterprise space.

And we should be concerned about that.

Growing Pains, Version 2.0

Why? Because wallets are about to hit the same hurdles that identity federations have been jumping (or tripping over) for a long time. Things like trust frameworks, governance, funding models, and the thorny question of how to manage identity at scale are all about to come knocking. It’s one thing to issue verifiable credentials, but it’s a whole other beast to manage them securely and efficiently across borders, organizations, and systems.

R&E federations have been there, done that, got the t-shirt, and are still trying to figure out if the t-shirt fits. Or if it’s even still wearable.

The Disconnect Between Worlds

Here’s where things get tricky: despite the similarities, the experience of the R&E sector – where we find the largest and most active identity federations in the world – isn’t translating to the commercial or enterprise space. The enterprise world, which is now buzzing about digital wallets and verifiable credentials, seems to be missing the memo on the challenges of running federations.

What I tend to hear is “eh, that’s SAML-based. SAML is dead, didn’t you know? The R&E experience couldn’t possibly be relevant to my Very Special use case (that happens to look like a thousand other use cases).”

Identity federation isn’t just about the technology. It’s about building trust between organizations, having a solid governance framework, and making sure there’s a sustainable model to keep the lights on. And right now, most commercial ventures diving into wallets are focusing more on the tech (which, let’s face it, is the shiny part) and less on the less-glamorous, but critical, infrastructure.

The R&E Space is Tired – And Underfunded

Now, let’s talk about the state of the R&E world. It’s tired. Federations are not only underfunded but often stretched to their breaking points. Out of the 76 federations we know about globally, maybe five have the funding and resources to do the innovative work that’s needed to stay ahead. The rest? They’re just trying to keep things running, dealing with legacy systems, and clinging to SAML because it’s the devil they know.

But the reality is that while the federations in R&E have the most experience in dealing with identity at scale, they’re not in a position to help the commercial world solve its wallet issues. And frankly, many of them are struggling to stay relevant as the world moves toward OpenID Connect and verifiable credentials.

What Comes Next?

So, what does this mean for the future of wallets and federations? Well, the commercial world is about to discover that managing digital identity is more than just fancy tech. It’s about trust, governance, and sustainability – all things that the R&E space has been grappling with for years.

The challenge is that while the R&E world has valuable lessons to offer, it may not have the capacity or energy to lead the charge into this next era of digital identity. And without a more collaborative approach that brings together the best of both worlds, we could see a lot of wheel reinvention – and a lot of avoidable mistakes.

In short: the wallets are coming, but are we ready for what’s next? That’s the real question.

If you’re interested in learning more about navigating this process or need support in engaging with standards development, don’t hesitate to reach out. With my experience across various SDOs, I’m here to help guide you through the complexities of Internet standards development.

The post The Wallets Are Coming – But Are We Ready for What’s Next? appeared first on Spherical Cow Consulting.


Transmute TechTalk

The Catalyst for the Much-Needed E-Commerce Evolution: Verifiable Credentials


E-Commerce has gripped the global market, but enhanced consumer access to cheap goods comes at a hefty risk for brands. The apparel market has particularly been infiltrated, accounting for nearly 15% of all seized counterfeit products (OECD).

The rise in counterfeit products can be attributed to an increasing rate of de minimis shipments, valued at $800 or less, which have put increasing strain on US Customs and Border Protection operations and made it difficult for CBP officials to detect and intercept counterfeit goods. The sheer volume of these shipments — totaling 1 billion in 2023 according to CBP — makes manual inspection virtually impossible.

Transmute has spent five years working with CBP and DHS to introduce a powerful solution for the future of trade: verifiable credentials. By transforming traditional trade documents into fully digital twins, we can trace each shipment’s data trail from the original source all the way down the supply chain. Cryptographically signed and secured data provides unprecedented insights for CBP, empowering them to automatically verify shipment authenticity and origin.

Verifiable credentials within the supply chain will enable brands and consumers to build trust, protecting intellectual property and mitigating financial risks. By facilitating the sharing of trustworthy data earlier in the process, CBP can vet trusted trade actors and focus their efforts on suspicious shipments.

Transmute’s apparel use case will be officially tested in a commercial and operational setting in November 2024 during the E-Commerce Tech Demo. Parties interested in observing how the data “pipe” to CBP will operate may submit their interest here.

The flow Transmute presents is relatively straightforward, but here’s a visual of the process:

Transmute’s E-Commerce Workflow Diagram. Blue pencils represent credential issuance, and green checks represent credential recipience.

The process kicks off with GS1’s Global Office issuing a GS1 Prefix License credential, which is shared with the GS1 US member organization, which in turn creates a GS1 Company Prefix License credential and assigns it to a brand owner. The brand owner holds the creative rights and intellectual property to a brand; think Disney or Skechers. As a trust anchor, GS1 recognizes the brand’s authority over its IP, establishing it as a credible, vetted organization with the power to grant other parties contracted allowances to create and sell products using its brand name.

Extending the trust established by GS1, brand owners are then able to issue several documents that assert they’ve done business with a third-party retailer (for E-Commerce use cases, these are online sellers), and that the seller’s goods are permissible rather than counterfeit. Traditionally, this contracted relationship is nearly impossible for CBP to discern independently, but with GS1 and brand owners issuing credentials that establish the third party as a legitimate actor, CBP doesn’t need to seize the goods and verify their authenticity.

Issuing documents that recognize a retailer’s legitimacy in the form of a verifiable credential allows brand owners to maintain greater authority over their brand and goods. They retain the power to revoke the credentials at any time, whether as a result of malintent or contract expiration. If goods presented to CBP contain a revoked credential, officials will automatically be notified of the “broken” trust chain and seize the package.
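
For orientation, a credential in this chain might look roughly like the object below. The outer structure (@context, type, issuer, credentialSubject, credentialStatus) follows the W3C Verifiable Credentials data model, and credentialStatus is what makes the revocation described above checkable; the GS1-specific field names and identifiers are illustrative assumptions, not the published schema.

// Illustrative shape only; the credentialSubject fields are assumed.
const companyPrefixLicense = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "GS1CompanyPrefixLicenseCredential"],
  issuer: "did:web:gs1us.example",     // GS1 US member organization
  credentialSubject: {
    id: "did:web:brand-owner.example", // the brand owner being licensed
    licensedPrefix: "0614141",         // example GS1 company prefix
  },
  credentialStatus: {                  // lets verifiers check for revocation
    id: "https://gs1us.example/status/94567#3",
    type: "StatusList2021Entry",
  },
  // proof: { ... }                    // issuer's signature, omitted here
};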

Verifiable credentials serve online sellers as well. When the retailer has been verified as a trusted party, they can submit standard import documents (commercial invoice, packing list, etc.) to both logistics providers and CBP as soon as the data becomes available — sometimes as soon as a customer has placed the order. Pre-arrival data reduces delays at the border, which often result in hefty fines and consumer dissatisfaction. Brands and sellers who submit pre-arrival data in the verifiable credential format deepen CBP’s trust in them, expanding CBP’s capacity to target suspicious goods.

The ability to reconcile and visualize interconnected data points is (perhaps surprisingly) unprecedented. Today, officials are required to manually connect data elements, coding relationships one data element at a time to generate a comprehensive view of the data, its validity, and its issuers. You can imagine how redundant and inefficient this process is, and the toll it takes not only on CBP officials, but also on brand owners eager for a way to confidently map the path their products — and data — follow along a supply chain.

Verifiable credentials issued through Transmute’s Platform represent a unique opportunity for organizations and CBP to automatically render a visual representation of the data in the form of a “trust graph.” The data is “living,” meaning the chain will be automatically broken if there’s an attempt to tamper with the data or if the contract enabling a seller’s temporary use of a brand’s IP expires. It’s also “smart” — officials can query the graph, specifically seeking to comprehend the relationships between the supply chain actors, and gauge their authenticity. If an online seller fails to demonstrate a chain of custody that extends to a trust anchor — like GS1 — the gap will be immediately obvious and won’t require custom assembly of the data.
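
A minimal sketch of that broken-chain logic: walk issuer links upward from a party and succeed only if an unrevoked path reaches a trust anchor such as GS1. The data shapes are illustrative, not Transmute's API.

interface ChainCredential {
  issuer: string;  // who vouches
  subject: string; // who is vouched for
  revoked: boolean;
}

// True only if an unbroken, unrevoked chain of credentials links the party
// to a recognized trust anchor; any revoked hop breaks the chain.
function reachesTrustAnchor(
  party: string,
  credentials: ChainCredential[],
  anchors: Set<string>,
  seen: Set<string> = new Set()
): boolean {
  if (anchors.has(party)) return true;
  if (seen.has(party)) return false; // guard against cycles
  seen.add(party);
  return credentials
    .filter((c) => c.subject === party && !c.revoked)
    .some((c) => reachesTrustAnchor(c.issuer, credentials, anchors, seen));
}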

Verifiable credentials and trust graphs represent a new era for global supply chains — one built on trust, transparency, and dynamic data.

You can find Transmute’s E-Commerce and Steel Workflow Diagrams on https://platform.transmute.industries/workflows/definitions.

Transmute is committed to digitizing global supply chains, applying modern cryptography standards for more efficient, automated, and safer cross-border trade.
Sign up for free now on https://platform.transmute.industries.

Ava Tusek
Operations Manager
Transmute
https://platform.transmute.industries

The Catalyst for the Much-Needed E-Commerce Evolution: Verifiable Credentials was originally published in Transmute on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Future-Proofing Your Identity Systems: What SAP’s IDM Sunset Means for Your Organization


by Matthias Reinwarth

With SAP announcing the end of maintenance for its Identity Management (IDM) system by 2027 and extending support through 2030, organizations using on-premises identity governance systems face a critical decision. While this may seem like ample time, replacing an Identity Governance and Administration (IGA) solution is a complex and often lengthy process that can take several years to complete. Organizations must begin planning now to avoid rushed decisions and potential disruptions.

A Complex Transition Ahead

Replacing an IGA system is far more than a simple technical upgrade. These systems are deeply embedded in user lifecycle management, provisioning, and access governance, and swapping them out can be challenging. On average, replacing an IGA system takes at least three years, owing to the need for thorough planning, process alignment, and system integration. The decisions made today will affect organizations for decades to come, making it critical to consider future requirements rather than merely replicating existing systems with newer tools.

Rethinking IAM for the Future

The end of SAP’s IDM system provides an opportunity to reimagine how Identity and Access Management (IAM) should be designed in the future. Rather than focusing on a like-for-like replacement, organizations should take a strategic approach, considering how identity governance will evolve in a hybrid IT environment. Modular, flexible architectures – especially those based on the KuppingerCole Identity Fabric – can provide the adaptability needed to address evolving security, governance, and access management challenges in hybrid environments.

Regulatory Pressure and Hybrid Complexity

The regulatory environment around identity management has become increasingly complex, and organizations must now comply with stricter access governance requirements. Hybrid IT setups, combining on-premises systems with cloud services, complicate the landscape. Many organizations already run multiple identity management systems - one for on-premises applications and another for cloud services - leading to integration headaches. However, this challenge also presents an opportunity to streamline identity governance processes and modernize outdated systems.

Efficiency Through Automation

One key lesson from traditional IGA implementations is the need for greater automation. Manual processes, such as cumbersome recertification workflows and role management, often reduce efficiency and increase the risk of errors. Modern IGA solutions should prioritize automation to handle provisioning and governance tasks more effectively. Over-customization has been a frequent issue with legacy IGA systems, leading to complex environments that are difficult to update and maintain. Reducing customization in favor of standardized, scalable solutions can simplify future upgrades and lower long-term maintenance costs.

Exploring Alternatives: Cloud and Hybrid Approaches

With SAP shifting its focus toward cloud-based identity services, organizations must evaluate the potential of cloud IGA solutions. Both SAP Cloud Identity Services and Microsoft Entra ID Governance services might offer viable alternatives to on-premises IDM systems, but a one-size-fits-all approach is rarely the answer. Each organization has unique needs based on factors like regulatory requirements, business size, and complexity. Conducting a comprehensive requirements analysis is essential before selecting a tool, ensuring it aligns with long-term strategic goals.

Holistic Planning for a Future-Ready IGA

The replacement of an IGA system isn't just a technical exercise. It requires a holistic rethinking of processes such as policy enforcement, role models, and integration with risk management solutions. The cost of such projects goes beyond licensing fees, as implementation can be six to ten times higher than subscription costs alone. Therefore, a thorough approach to process reviews, tool selection, and planning will pay dividends, reducing the risk of costly rework or operational inefficiencies.

The Financial Impact of IGA System Replacement

Replacing an IGA system is a significant financial commitment, especially with the shift toward subscription-based models. However, organizations that carefully plan and choose the right solutions will see long-term benefits in terms of compliance, operational efficiency, and security. Investing in the right identity governance infrastructure now will ensure that future regulatory and technological challenges are met with agility.

Time to Act

The end-of-life announcement for SAP's IDM system should serve as a wake-up call for organizations still reliant on traditional on-premises identity systems. The clock is ticking, and the time to start planning is now. By conducting a thorough analysis of current and future requirements, avoiding over-customization, and embracing automation, organizations can ensure they are well-prepared for the evolving world of identity governance and access management. The future of identity governance lies in flexible, scalable solutions that integrate seamlessly with hybrid IT environments - don't wait until 2027 to start the journey.


Northern Block

Announcing Our Strategic Partnership with Digital Governance Institute

Northern Block partners with Digital Governance Institute (DGI) to deliver joint governance consulting services and Trust Registry Infrastructure.

We are excited to announce that Northern Block has entered into a strategic partnership with Digital Governance Institute (DGI) to offer joint governance consulting services and product solutions to the digital trust ecosystem. Together, we aim to combine Northern Block’s Trust Registry Infrastructure as a Service (IaaS) with DGI’s renowned ecosystem network governance services, creating a solution that provides both robust technical infrastructure and transparent, accountable governance frameworks.

Why We’re Doing This

The demand for secure, verifiable digital trust infrastructures is growing exponentially. Northern Block recognized early on that trust registries play a crucial role not only as utilities that make claims about ecosystem participants publicly available but also as market makers. Trust registries provide a platform for entities to verify identity and authority information, adding integrity during transactions. A trust registry is only valuable if the data inside it has integrity, and this integrity is primarily achieved through strong governance. Without clear governance and conformance programs built on it, the system risks becoming a “garbage in, garbage out” scenario. Our partnership with DGI ensures that our governance module and trust registry administration processes are aligned with the highest standards, safeguarding the integrity of the data.

DGI brings unmatched governance expertise to the table, having authored the only fully-formed governance toolkit for the Trust Over IP Foundation deployed in high-assurance environments such as Bhutan’s National Digital Identity Ecosystem and the Global Legal Entity Identifier Foundation (GLEIF). Their work on governance and conformance aligns perfectly with Northern Block’s mission to deliver high-assurance digital trust ecosystems, supported by open standards and transparent governance.

Value for the Industry

By combining forces, we are creating a total governance solution for the industry—one that ensures trust registries are not only technically sound but also governed by strong, generally accepted standards. This collaboration will help digital identity and verifiable credential ecosystems establish integrity, trust, and transparency, driving adoption in sectors that require high levels of assurance and governance.

Our joint offering adheres to globally recognized standards, with both Northern Block and DGI being leaders in the Trust over IP (ToIP) Foundation, contributing to technical and governance-related work respectively. Our work puts into practice key standards, including the Trust Registry Query Protocol and the Governance Framework Metamodel. The Trust Registry Query Protocol allows any entity to interact with a trust registry by asking a simple question: “Does Entity X have Authorization Y, in the context of Ecosystem Governance Framework Z?” Meanwhile, the Governance Framework Metamodel and toolkit help establish and implement risk-based governance for ecosystems, having already been successfully deployed in major initiatives. 
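
As a concrete illustration of that query pattern, here is a minimal Python sketch of a client asking a trust registry the question above. The endpoint path, parameter names, and response fields are hypothetical stand-ins for illustration, not the normative Trust Registry Query Protocol API.

```python
import requests

# Hypothetical registry base URL and API shape, for illustration only.
TRUST_REGISTRY = "https://registry.example.org"

def has_authorization(entity_id: str, authorization: str, egf_did: str) -> bool:
    """Ask the registry: does Entity X have Authorization Y under EGF Z?"""
    resp = requests.get(
        f"{TRUST_REGISTRY}/entities/{entity_id}/authorizations",
        params={"authorization": authorization, "egf": egf_did},
        timeout=10,
    )
    resp.raise_for_status()
    # Assume the registry answers with a simple status field.
    return resp.json().get("status") == "current"

# Entity X, Authorization Y, and Ecosystem Governance Framework Z (all illustrative)
print(has_authorization(
    "did:example:issuer123",
    "issue:DiplomaCredential",
    "did:example:egf-education",
))
```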

This partnership brings together the governance and accountability of service providers that conform to governing authority requirements with the technical assurance needed to rely on their schemes. This ensures our solution remains interoperable and scalable, allowing clients to leverage the best available technology and governance practices.

For our active customers, this partnership provides significant value. Credential issuers will benefit from increased robustness around their ecosystem credentials, enhancing their value both within and outside their respective ecosystems. Credential verifiers will gain greater confidence when interacting with holders, knowing they are practicing data minimization and requesting only the necessary data proofs, while also being able to accept credentials from other ecosystems. Credential holders will be better equipped to authenticate and verify any public entity they interact with, improving trust and security for them.

This value extends beyond digital credential use cases. Trust registries also provide significant benefits in other types of digital interactions, such as within browsers (e.g., trusted web domains), email clients (e.g., trusted emails), platforms (e.g., trusted content) or APIs (e.g., trusted access). However, for these registries to deliver real value, they must be backed by strong governance and robust processes—this is the core focus of our collaboration with DGI.

About Northern Block and DGI

Northern Block, founded in 2017 with offices in Toronto, Gatineau, and Amsterdam, has been a leader in digital trust solutions, developing a Trust Registry IaaS that supports ecosystems across various industries. We are passionate about ensuring trust in digital interactions and are working towards building a safer, more reliable digital landscape.

Digital Governance Institute (DGI), based in Bellevue, Washington, is led by Scott Perry, an expert in digital governance frameworks. DGI has provided governance and conformance services to a range of ecosystems, ensuring their solutions meet the highest standards of integrity and accountability.

For more information, please contact:

Northern Block
Website: www.northernblock.io
Email: Mathieu Glaude, Founder & CEO – mathieu@northernblock.io

Digital Governance Institute
Website: www.digitalgovernanceinstitute.com
Email: Scott Perry, Founder & CEO – info@digitalgovernanceinstitute.com

We look forward to bringing this powerful, combined offering to the market and driving the next wave of digital trust solutions.

The post Announcing Our Strategic Partnership with Digital Governance Institute appeared first on Northern Block | Self Sovereign Identity Solution Provider.


Spherical Cow Consulting

From Concept to Consensus: Developing Internet Standards

I love the whole Internet standards development process and tend to collect standards development organization (SDO) meeting badges like other people collect Pokémon. (Don’t judge; there are stranger hobbies out there. Granted, none come to mind right now, but I’m sure they exist.) Having been an active part of cross-organization collaborations since around 2001, the process of developing broadly applicable standards is natural to me and deeply mystifying to anyone outside the standards development space.

The creation of an Internet standard is a journey that involves personal autonomy, teamwork, and the voices of countless stakeholders. It requires a mix of individual expertise and collective effort. Arguments, debates, concessions, compromises, and pragmatism are all characteristics of a good standards process.

The Spark of an Idea

How does a standard even get started? Every single one begins with a problem needing to be solved, sparked by a challenge or a gap in the current technology landscape. This is where personal autonomy plays its first crucial role. An individual or a small group identifies a problem and starts brainstorming a solution. This initial phase is all about creativity and innovation, with minimal constraints. It is the part the individual loves most while their management wonders when the problem will actually be _solved_.

But to move from an idea to something actionable, the next step is to find like-minded individuals who share the same problems. While technology is evolving faster than ever, it is rarely entirely new. That means one or more SDOs are almost certainly working in the space. That’s your first stop to finding others who will likely see the value in the idea and are willing to invest time and effort in developing it further.

Building Consensus: The Heart of Internet Standards Development

Of course, finding a home for an idea to turn into an Internet standard is critical; it’s also the point at which people new to standards start to get overwhelmed. Depending on the type of SDO involved—whether it’s a treaty-based, industry-based, or de facto community-based organization—the process towards standardization will vary. So. Much. Process. That said, though, the core principle remains the same: consensus.

Consensus is the lifeblood of standards. It’s the point where autonomy meets teamwork. Each participant brings a unique perspective, whether they are representing a government, a corporation, or their own independent expertise. The goal is to hammer out the technical details in a way that works for everyone—or at least for most stakeholders involved.

At this stage, the process can become incredibly challenging. It’s not just about getting the technical details right; it’s about navigating the complex web of competing interests. For example, treaty-based SDOs often involve nation-state politics, where technical merit might take a backseat to broader geopolitical concerns. Meanwhile, industry-based SDOs have to balance the needs of various commercial entities, each with its own agenda.

For those engineers who want to focus purely on the tech, the non-technical skills required to move an idea forward can be excruciating to develop.

The Role of Stakeholder Engagement

And speaking of non-technical skills, the people who came up with the initial idea cannot be the only people who can offer input into the standard. There are _always_ additional stakeholders that need to be brought in. If people and organizations don’t have a say, they may not adopt the standard. A standard that isn’t adopted is ultimately a waste of time and energy. So, all this means that the process must be open enough to allow for broad participation but structured enough to keep things moving forward.

Sometimes, the standard might be developed within a small, focused community before it’s presented to a broader audience. This is often the case with de facto or community-based SDOs, where the initial work is done by a committed group of experts who are passionate about the topic. It’s my favorite way of doing things. These standards can gain significant influence if they garner widespread adoption, often transitioning into more formalized industry-based standards over time.

Publication and Beyond: The Long Tail of Internet Standards Work

Through dangers untold and hardships unnumbered (how to say you’re Gen X without saying you’re Gen X) or, more to the point, after much discussion, negotiation, and revision, the standard is finally ready for publication. Enter MORE PROCESS. Each SDO will have a process for its participants or members to indicate support for the proposed standard to be published by that SDO.

If you’ve done your work and engaged a broad swath of stakeholders, then the approval part of the process will go much more smoothly. Publication isn’t the end, though: standards need to be implemented, which often means dealing with feedback from those putting them into practice in the real world.

This can be frustrating for those not involved in the early stages. It’s not uncommon to hear complaints that a standard doesn’t quite fit the needs of a particular organization or use case. Hearing that at the point the initiating working group thinks it’s all done is, to say the least, wildly frustrating. And I mean frustrating for both the group that worked on the standard and the organizations who are only just hearing about it at the end of the game. Still, even late in the game, having that opportunity to catch any missing bits is important. To paraphrase an old adage: The best time to get involved in standards development was years ago, but the second best time is now.

Why You Should Care

Now, on to why you should care about the standards development process: The standards that come out of these efforts have a direct impact on how you (as vendors, enterprises, humans, etc.) operate in the digital world. For technologists, these standards shape the tools and protocols that we rely on every day. And the more diverse the input into these standards, the better they will be at addressing the needs of the global community.

By engaging in the standards development process, you not only contribute to the betterment of the industry but also ensure that your organization’s needs are met. If you’re not ready to dive in at the deep end, there are many ways to get involved. Start by participating in a community group or contributing to the early thoughts through organizations like IDPro® or conferences like the Internet Identity Workshop (IIW).

Wrap-Up: Your Role in Shaping the Future

The journey from an idea to a published standard is long and complex, but it’s also incredibly rewarding. It’s rewarding personally because while it requires the input of many, each person brings their own autonomy and expertise to the table. And from an organization’s perspective, it’s where you truly demonstrate your thought leadership on how the technology in your field should evolve. And if you’re worried you’re “not technical enough,” I promise you that there’s a role for you to play in shaping the future of Internet standards.

So, if you’ve ever found yourself frustrated by a standard that doesn’t quite meet your needs, consider this: You have the power to change that. Get involved, make your voice heard, and be part of the team that’s building the digital world of tomorrow.

If you’re interested in learning more about navigating this process or need support in engaging with standards development, don’t hesitate to reach out. With my experience across various SDOs, I’m here to help guide you through the complexities of Internet standards development.

The post From Concept to Consensus: Developing Internet Standards appeared first on Spherical Cow Consulting.


Ocean Protocol

Crypto Model Factoring: Data Challenge Podium

In collaboration with Numerai, we challenged data scientists to create custom datasets and build multi-factor models to explain cryptocurrency price variance.

The Crypto Factor Modeling Data Challenge invited participants to analyze cryptocurrencies by developing models that explain price fluctuations. Participants gathered data from sources such as Tardis, Kaiko, CCXT, and Uniswap to create custom datasets. They then used these datasets to build multi-factor risk models identifying the factors driving cryptocurrency prices.

Numerai’s objective with this competition was to lay the groundwork for understanding factors in the cryptocurrency markets, aiming to determine whether crypto factor investing or risk models could even be feasible. The competition emphasized several key elements in the reports: creativity, methodology, statistical rigor, explanatory power, conclusiveness, and reproducibility.

The winning submissions demonstrated one or more of these qualities by crafting innovative features and factors, applying statistical techniques to uncover their relationship with cryptocurrency price variance, and proving the predictive importance of these factors.

Additionally, the winners openly shared their code and adhered to the minimum statistical rigor required for peer review. Some notable findings revealed that factors such as volatility, momentum, value, and sentiment were predictive of crypto price movements. While these findings may align with factors seen in traditional stock markets, conducting this research openly and transparently is essential.

Numerai has fostered a community of expert data scientists who can build upon these reports, further enhancing the predictive capabilities in crypto markets. These reports could serve as a starting point for crypto hedge funds interested in developing risk models or exploring factor-based investing in cryptocurrency.

Top submissions: “Crypto Factor Modeling Data Challenge”

1st Place: NeuralNinja

The report by NeuralNinja details a structured methodology for developing multi-factor risk models to predict cryptocurrency price movements. It combines a wide range of data sources, including macroeconomic indicators from the World Bank, historical cryptocurrency prices obtained through the CCXT library, and market sentiment data from Google Trends. The report emphasizes the importance of data quality, meticulously cleaning and merging datasets to create a robust foundation for analysis. Key technical indicators such as RSI, MACD, and Bollinger Bands are calculated to enhance the dataset’s predictive power. The report also highlights the integration of features from the Numerai platform, adding depth to the analysis. This comprehensive approach ensures a thorough understanding of cryptocurrency price movements.

In the feature engineering phase, the report outlines techniques to create new variables that capture temporal dependencies and market dynamics. Lagged features of key indicators are introduced to account for past price movements while rolling statistics provide insights into longer-term trends. The report also discusses the calculation of momentum and volatility measures and interaction terms that explore combined effects on price movements. The final dataset is comprehensive, integrating macroeconomic factors, market data, technical indicators, and sentiment measures, setting the stage for developing and evaluating effective multi-factor risk models to understand and predict cryptocurrency price fluctuations.
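
As a rough sketch of the feature-engineering steps described above, the following pandas snippet builds lagged returns, rolling statistics, and simple momentum and volatility measures from a price series. Column names and window sizes are illustrative choices, not taken from the report.

```python
import pandas as pd

def engineer_features(prices: pd.Series) -> pd.DataFrame:
    """Derive lagged, rolling, momentum, and volatility features from prices."""
    df = pd.DataFrame({"close": prices})
    df["return_1d"] = df["close"].pct_change()

    # Lagged features capture temporal dependencies on past price movements.
    for lag in (1, 7, 30):
        df[f"return_lag_{lag}d"] = df["return_1d"].shift(lag)

    # Rolling statistics provide insight into longer-term trends.
    df["rolling_mean_30d"] = df["close"].rolling(30).mean()
    df["rolling_std_30d"] = df["close"].rolling(30).std()

    # Simple momentum and volatility measures.
    df["momentum_30d"] = df["close"] / df["close"].shift(30) - 1
    df["volatility_30d"] = df["return_1d"].rolling(30).std()

    return df.dropna()
```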

2nd Place: Ahan

The report A Detailed Case Study on Crypto Multi-factor Risk Analysis by Ahan investigates cryptocurrency investment strategies through a multi-factor framework traditionally used in equity markets. It highlights the rapid growth of the cryptocurrency market, which reached a capitalization of approximately $1,676 billion in 2023, with Bitcoin and Ethereum being the dominant assets. The study employs various financial models, including the Fama-MacBeth regression, Fama-French models, and machine learning techniques, to analyze the predictive capabilities of factors such as market, size, value, and momentum. It emphasizes the need for a tailored approach to understand cryptocurrency returns and risks due to their unique characteristics and high volatility compared to traditional assets.

Key findings reveal that traditional models like CAPM are less effective in explaining cryptocurrency returns, while modified Fama-French models incorporating cryptocurrency-specific factors provide better insights. The analysis indicates that smaller cryptocurrencies often outperform larger ones, mirroring trends in equity markets. Additionally, investor sentiment and social media influences significantly impact cryptocurrency pricing. The research suggests that systematic inconsistencies in the market could allow for return predictability, urging a reevaluation of conventional investment evaluation methods to accommodate the distinct dynamics of cryptocurrencies.
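
For readers unfamiliar with the Fama-MacBeth regression mentioned above, here is a minimal numpy sketch of its second stage: one cross-sectional regression per period, with factor risk premia estimated as the time-series average of the per-period coefficients. The inputs are illustrative; in practice the exposures would first be estimated from time-series regressions.

```python
import numpy as np

def fama_macbeth(returns: np.ndarray, betas: np.ndarray) -> np.ndarray:
    """returns: (T periods x N assets); betas: (N assets x K factor exposures).

    Returns the average [intercept, premium_1, ..., premium_K] across periods.
    """
    T, N = returns.shape
    X = np.column_stack([np.ones(N), betas])  # intercept + factor exposures
    estimates = np.empty((T, X.shape[1]))
    for t in range(T):  # one cross-sectional regression per period
        estimates[t], *_ = np.linalg.lstsq(X, returns[t], rcond=None)
    return estimates.mean(axis=0)  # time-series averages = risk premia
```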

3rd Place: Malihe

The report submitted by Malihe focused on a multi-factor model for forecasting cryptocurrency returns, analyzing nearly 120 cryptocurrencies by integrating various market, economic, and social media factors. Key market factors include momentum, market cap, liquidity, and volatility, while economic indicators such as the Federal Funds Effective Rate and inflation rates were also examined. Data was sourced from CCXT, Coingecko, Google Trends, and FRED. The study found strong correlations between returns and the High Minus Low (HML) factor, indicating its significant influence on performance. However, economic factors showed weak correlations when analyzed in isolation.

The modeling results revealed that HML and momentum are significant predictors of returns, while other factors like market cap and volatility did not demonstrate substantial effects. The analysis also highlighted that higher liquidity is generally associated with better market performance. Interestingly, Google Trends data was included but showed weak correlations with cryptocurrency returns, suggesting it may not be a reliable standalone predictor. Overall, the findings emphasize the importance of understanding market dynamics, particularly momentum and value factors, to inform investment strategies in the volatile cryptocurrency landscape.

Interesting Facts

Rapid Market Growth: Since the launch of Bitcoin in 2008, the cryptocurrency market has exploded, reaching a capitalization of approximately $1,676 billion in 2023, with Bitcoin and Ethereum representing 41.8% and 18.1% of this market, respectively.

High Volatility: Cryptocurrencies are known for their extreme volatility, with most exhibiting beta values greater than 1. For instance, Dogecoin has a beta of approximately 3.045, indicating it is over three times as volatile as the S&P 500 index.

Emergence of Specialized Funds: Over 170 hedge funds focused on cryptocurrencies have emerged since 2017, highlighting the growing institutional interest in crypto trading and hedging strategies.

Predictive Models: Traditional models like CAPM are less effective for cryptocurrencies, while modified Fama-French models that include cryptocurrency-specific factors like size and momentum provide better insights into returns.

Investor Sentiment Impact: Social media sentiment significantly influences cryptocurrency pricing, indicating that market psychology plays a crucial role in determining returns.

2024 Championship

The challenges offer prize pools from $10,000 to $20,000, distributed among the top 10 participants. Our points system for the championship allocates between 100 and 200 points to the top 10 finishers in each challenge, with each point valued at $100. Participants accumulate these points toward the 2024 Championship. Last year, the top 10 champions received an additional $10 for each point they had earned.

2024 Championship standings prior to the Crypto Model Factoring challenge

Additionally, the top 3 participants in each challenge can collaborate directly with Ocean to develop a profitable dApp based on their algorithm. Data scientists maintain their intellectual property rights while we provide support in monetizing their innovations.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to stay up to date. Chat directly with the Ocean community on Discord, or track Ocean’s progress on GitHub.

Crypto Model Factoring: Data Challenge Podium was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Tokeny Solutions

How Tokeny’s Platform Empowers Fund Administrators To Act in Onchain Finance

Product Focus

How Tokeny’s Platform Empowers Fund Administrators To Act in Onchain Finance

This content is taken from the monthly Product Focus newsletter in September 2024.

In previous product newsletters, we focused mainly on the technical features we’ve developed. In this edition, we’d like to highlight how our onchain operating system is utilized by one of the most crucial stakeholders in tokenized funds: fund administrators.

Fund administrators aiming to manage tokenized funds need onchain tools to handle onboarding, compliance, asset management, and secondary market operations seamlessly. Tokeny provides a complete suite of solutions, offering the necessary tools for investor onboarding, token issuance, compliance rules management, full lifecycle fund servicing, and secondary market functionality—ensuring a smooth transition to onchain fund management.

How Our Products Meet Fund Administrators’ Needs at Each Stage:

Onboarding: Fund administrators require a streamlined onboarding process with KYC checks and secure payment collection. Our Investor App, part of the Tokeny Platform, offers a comprehensive solution for collecting investor information, conducting digital verification, and supporting payments in a fully integrated and digital manner.

Onboarding needs and solutions:

- Allow easy browsing of offers: list asset offers and details of assets
- Collect investor info: customizable form fields and digital workflows
- Digital verification and signing: integrated digital verification and signing tools (e.g., SumSub, DocuSign)
- Automate calculation: automated calculation of exchange rates and fees
- Collect payments: support for multiple currencies and all payment methods, from fiat to onchain cash and crypto

Issuance: Fund administrators need to represent assets onchain in a compliant manner. Tokeny Platform (turnkey solution) or T-REX Engine (APIs) allows them to tokenize assets on any preferred blockchain with upgradable smart contracts.

Issuance needs and solutions:

- Flexible blockchain support: support for any EVM chain with a switch-chain feature
- Represent assets onchain: token detail setup and one-click token deployment
- Compliance setup: set investor rules and transfer restrictions
- Upgradability: upgradable smart contracts

Servicing: Managing onchain funds requires full lifecycle servicing, from compliance to investor relations. T-REX Platform or T-REX Engine enables managing KYC/AML, investor data, cap tables, and token controls, automating many operational tasks while keeping real-time records of ownership.

Servicing needs and solutions:

- KYC/AML management: onchain qualification of investors and automated onchain compliance validation
- Private market management: order management for subscription and redemption
- Tokenized asset management: selective freeze of tokenized assets, suspension of all tokenized assets, mandated transfers, redemption, or token recovery
- Data management: manage offering details, identities, and investor details
- Cap table management: real-time ownership records, with positions checkable at any time
- Investor relations: built-in email notification tool

Secondary Market: Handling secondary market operations is key to increasing liquidity. Tokeny Platform or T-REX Engine empowers fund administrators to control distribution channels, approve peer-to-peer trades, and verify deposit wallets for compliant trading.

Secondary market needs and solutions:

- Automate operations: advanced transfer functions (DvP, etc.)
- Distribution channel control: authorize distribution channels
- Secondary transfers: authorize and approve peer-to-peer trades and trading intention offers

Tokeny’s solutions empower fund administrators to reduce operational friction and ensure full control over compliance, distribution, and data management, all in real-time.

We are excited to work with leading fund administrators globally and look forward to helping more fund administrators accelerate the adoption of onchain finance.

Xavi Aznal, Head of Product

The post How Tokeny’s Platform Empowers Fund Administrators To Act in Onchain Finance appeared first on Tokeny.


KuppingerCole

Nov 21, 2024: Passkeys in a Zero Trust World – Blessing or Curse? 

In the modern digital landscape, organizations are confronted with growing cybersecurity challenges that demand stronger authentication methods. Zero Trust frameworks have become essential for bolstering security postures, placing a significant emphasis on identity verification. As traditional passwords become more vulnerable, passkeys are gaining traction for their phishing-resistant capabilities and their potential to transform authentication within Zero Trust environments.

Metadium

Termination of Keepin Service

Dear Community,

We want to inform you that the Keepin app will officially terminate all services as of September 30, 2024. We want to express our gratitude to everyone who has used the Keepin app during this time. Your support has been invaluable.

Starting October 1, all support for the service, including updates, new downloads, and operational support, will end.

We truly appreciate your support and understanding and apologize for any inconvenience caused by the service’s discontinuation.

Thank you.

Hello, this is the Metadium team.

We would like to inform you that the Keepin app will officially terminate all services as of September 30, 2024.

We sincerely thank everyone who has used the Keepin app.

Starting October 1, support for all services, including updates, new downloads, and operational support, will end.

We sincerely thank you for using the Keepin app and ask for your understanding that we are unable to continue the service.

Thank you.

The Metadium Team

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

Termination of Keepin Service was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 19. September 2024

KuppingerCole

IGA as the Centerpiece of Every Security Transformation Program

In today’s digital landscape, companies face growing cybersecurity challenges. Attacks on digital identities are increasing and are often successful, as recent incidents show. At the same time, digital identity is a key component of Zero Trust architectures, which enable controlled access to corporate data.

Identity Governance and Administration (IGA) plays a central role in addressing these challenges. Modern IGA solutions offer comprehensive capabilities for managing digital identities, from automating access rights to detecting anomalies. These capabilities are crucial for implementing robust security strategies in increasingly complex IT environments.

Dr. Phillip Messerschmidt, Lead Advisor at KuppingerCole, will examine current authorization trends around IGA, with a strong focus on authorization models and strategic considerations for realizing their benefits. Using several examples, he will explain use cases for the various authorization models and show how they support organizations in implementing their IAM and cybersecurity strategies.

Klaus Hild, Identity Strategist at SailPoint, and Moritz Anders, Partner for Digital Identity at PwC, will provide practical insights into the implementation of IGA solutions. They will explain how SailPoint's Identity Security Platform supports the various IGA capabilities and how PwC's capability model for Identity & Access Management enables large-scale IAM transformations.




auth0

Protecting REST APIs Behind Amazon API Gateway Using Okta

Learn how to set up an Amazon API Gateway and secure the REST API with Okta Customer Identity Cloud (CIC) Adaptive MFA Risk Score

Thales Group

INDRA AND THALES STRENGTHEN THE INTELLIGENCE OF THE BATTLEFIELD MANAGEMENT SYSTEM FOR THE SPANISH ARMY

The Spanish Army has been operating the Battlefield Management System (BMS) from Indra and Thales since 2021 with excellent results. Both companies will now equip it with increased processing power and performance to operate in tactical environments where increasingly intelligent platforms and systems exchange a growing volume of data to gain an advantage over the adversary. Indra and Thales are also working to lighten the architecture of the system so that it can be installed on tablets, which will make the units more mobile and not solely dependent on the in-vehicle system. The BMS, which ensures maximum interoperability with allied armies and is one of the most advanced solutions of its kind in the world, enables commanders to make the best decisions ever faster.

Madrid, September 19, 2024. Indra and Thales are reinforcing the capacity of the Spanish Army's Battlefield Management System (BMS) and preparing it to operate in highly digitalised scenarios, where the exchange of data and the level of coordination of the force are extremely high and critical to gaining an advantage over the adversary.

Both companies will evolve this system, which they developed themselves and which went into operation in 2021. The system has become one of the most advanced of its kind and a benchmark for armies worldwide.

The BMS enables commanders to monitor and disseminate orders in real time, helping them to make the best decisions quickly, and provides deployed units in the field with a complete view of the mission on digital mapping, allowing them to exchange tactical information, images and text messages to coordinate, which multiplies their effectiveness.

Increased processing power and performance

The goal of this evolution is to increase the processing power and performance of the system so that it can handle more information, which will provide greater situational awareness and force coordination, adapting it to a context in which the volume of data exchanged between platforms and weapon systems is constantly growing.

Antonio Hernández Bejarano, director of Business Development for Electronic Combat Management at Indra, explains that "the improvements also make it possible to exploit the capabilities of the new transmission media, making the most of the available bandwidth while providing the capacity, in a transparent way, to dynamically adapt the information flows in order to work in contested environments, where the adversary tries to impede your communications".

Thales Director of Projects, Juan José Forteza, underlines that "the system has been designed to guarantee interoperability with other allied armies, in line with NATO's Federated Mission Networking (FMN) standards, which facilitate the integration of the different command and control networks of allied countries, something crucial in the current context".

Both companies will also streamline the system architecture to enable it to be installed on tablets, meeting high mobility requirements. An additional benefit will be the integration of the BMS with the Army Logistics Management System (SIGLE) to reduce the workload associated with armour maintenance and improve the sustainment of vehicles and tanks throughout their life cycle, increasing their availability and the safety of crews.

The BMS system has demonstrated excellent performance in real missions of maximum complexity, such as the NATO Enhanced Forward Presence Mission (EFP) in which the Spanish Army was deployed.

About Indra

Indra is Spain's benchmark multinational and one of the leading global defence, air traffic and space companies that, through technology, protects our way of life today and anticipates the needs of the future. Its committed team of experts, its deep knowledge of the business and the latest technologies, and its unique capacity for innovation and systems integration make it the trusted technology partner for key operations and the digitisation of its customers around the world. Thanks to its leadership in large European programmes and projects, as well as its collaborative spirit and partnership strategy, it drives the industrial and innovative ecosystem in these sectors.

About Indra Group

Indra Group is a holding company that promotes technological progress, which includes Indra, a leading global defence, air traffic and space company; and Minsait, a leader in digital transformation and information technologies in Spain and Latin America. Indra Group drives a safer, more secure and connected future through innovative solutions, trusted relationships and the best talent. Sustainability is part of its strategy and culture, in order to respond to present and future social and environmental challenges. At year-end 2023, Indra Group had revenues of 4,343 million euros, more than 57,000 employees, local presence in 46 countries and commercial operations in more than 140 countries.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies in three domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital Identity. Thales develops products and solutions that help make the world a safer, greener and more inclusive place.

The Thales Group invests close to four billion euros a year in Research and Development, particularly in key areas such as quantum technologies, edge computing, 6G and cybersecurity.

Thales employs 81,000 professionals in 68 countries. In 2023, the Thales Group generated sales of €18.4 billion.

In Spain, Thales employs more than 1,200 people, almost all of them engineers, in the aerospace, defence and security, and cybersecurity and digital identity fields, covering all Thales markets and with 13 work centres distributed throughout the country. In Madrid, Thales houses the global competence centre for Border & Travel and, in the field of cybersecurity, the SOC (Security Operations Center), from which cyber threats to customers in Southern Europe are managed.

Elliptic

Crypto regulatory affairs: UAE takes steps to bolster its crypto regulatory framework

The United Arab Emirates continues to take important steps to cement its status as a hub for well-regulated cryptoasset activity.


KuppingerCole

Open Multi-Cloud, Intelligent Business Applications, and Security Robots

by Alexei Balaganski

Last week, I had an opportunity to attend CloudWorld 2024. Oracle uses its flagship event to unveil the most important announcements of the year, and after the break caused by the Covid pandemic, it was moved from San Francisco to Las Vegas. To be honest, I’m not a fan of the city’s scorching heat (it was over 40 degrees C outside at times). Thankfully, the agenda created by the company’s analyst relations team was so packed that I spent most of the four days inside the air-conditioned venue, attending keynotes and sessions, talking to Oracle’s executives and customers, and, of course, networking with other analysts. Here are some of my takeaways from the event.

The Open Multi-cloud Era

The beginning of this era has been announced by Larry Ellison in his keynote, when it was unveiled that Oracle now has strategic partnerships with each of the Big Three cloud providers to make Oracle Autonomous Database available directly in their respective infrastructures, with full feature parity with OCI’s own services and without the latency issues of traditional multi-cloud deployments. What this essentially means is that Oracle’s engineers deploy the company’s Oracle Cloud Infrastructure, specifically its Exadata platform and Oracle databases, directly in Microsoft’s, Google’s, and AWS’ own cloud datacenters and make it available to their customers through the native user interface, billing, and technical support channels of each provider.

Now, some purists might argue that this architecture is not really multi-cloud, since everything is contained within the infrastructure of each provider, and data does not flow between clouds (which, incidentally, is great news for AWS customers, since they don’t need to worry about egress fees). However, what’s important for customers is that they can now combine the best native services of each provider with all the latest features of the database they have known and loved for decades.

There is something ironic about Oracle going full circle—from the company’s roots in offering “a database that runs everywhere” on-premises to the new cloud model introducing a whole zoo of partially incompatible database services across providers to finally bringing the same “everywhere” promise back to life at an entirely different scale.

On a somewhat related note—the concept of “private cloud” is also undergoing a profound change. Oracle is known for offering a broad range of cloud deployment options to their customers—calling this flexible portfolio their “Distributed Cloud”. This year, the company announced the new OCI Dedicated Region 25, which will be available in a smaller, scalable size starting at only three racks and rapidly deployable within weeks. It has a 75% smaller launch footprint and simplified datacenter requirements and supports OCI’s 150+ AI and cloud services. What used to be possible only for large enterprises is now much more affordable.

AI Transforms Everything

Of course, artificial intelligence was another major topic during the conference—for both the company and its customers and partners. And Oracle had tons of announcements of new AI features and capabilities throughout their entire portfolio. At the infrastructure level, for example, OCI Supercluster, announced for 2025, will be the largest ever hardware platform, labeled as a Zettascale supercomputer, powered by over 100K Nvidia GPUs to run the most demanding AI workloads.

Both Oracle Database 23ai and HeatWave offer a multitude of built-in AI capabilities, from somewhat overlooked but still extremely useful machine learning algorithms to vector search that brings enterprise data to generative AI models. Needless to say, the big differentiator for both solutions, as opposed to specialized vector databases, is the ability to keep data in multiple formats (relational, graph, JSON, and now vector) in the same database and to run complex hybrid queries across them. We had an opportunity to hear from customers already using these capabilities in production, and the general agreement was that it just worked without any additional learning curve.
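
For a flavor of what such a hybrid query might look like, here is a minimal sketch using the python-oracledb driver against a hypothetical PRODUCTS table that combines relational columns with a VECTOR column; the schema, connection details, and embedding are assumptions for illustration, not from the keynote.

```python
import array
import oracledb

# Hypothetical connection and schema, for illustration only.
conn = oracledb.connect(user="demo", password="demo", dsn="localhost/freepdb1")
query_vec = array.array("f", [0.12, -0.30, 0.88])  # stand-in embedding

sql = """
    SELECT name, price
    FROM products
    WHERE category = :cat                        -- relational filter
    ORDER BY VECTOR_DISTANCE(embedding, :vec)    -- vector similarity (23ai)
    FETCH FIRST 5 ROWS ONLY
"""
with conn.cursor() as cursor:
    for name, price in cursor.execute(sql, cat="laptops", vec=query_vec):
        print(name, price)
```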

All Oracle’s industry apps and business analytics solutions have received major new AI-powered capabilities as well. Curiously, even Oracle APEX, the company’s “hidden gem” low-code application development platform, has received a major boost from the AI hype. For quite a while already, the APEX team has been working on a new programming language, more abstract and human-readable, to replace its original PL/SQL and make APEX much more compatible with modern CI/CD pipelines. This development, surfaced as an AI Assistant, also lets APEX apps be generated through a conversational approach. Apparently, using this technology internally already allows Oracle to develop their business apps 10 times faster.

One major concern I was happy to hear addressed during the event is what I call “AI agility.” Just like with cryptography, where quantum computers can potentially make existing algorithms irrelevant overnight, the current state of the AI market is also extremely unpredictable. Who knows which vendors, models, and algorithms will still survive in the next five years? Any sensible organization should be prepared to be agile with their AI deployments and ready to address these risks.

Securing Apps with Data Robots

Apparently, Larry Ellison loves robots. At least that was his term for Oracle’s approach towards security. To secure sensitive data in the cloud at a massive scale, humans are no longer good enough. Automation is the only viable approach, and Oracle has a plan for that, too. Their robot DBA is, of course, the Oracle Autonomous Database, and by next year, they promise to migrate all their apps and services to it, eliminating the human factor from database operations and security. They also plan to get rid of password-based authentication completely, which is definitely a welcome if somewhat bold promise.

Zero Trust Packet Routing (ZPR) is another promising development, introducing the identity component and security attributes to the network fabric of the Oracle Cloud. This approach combines Zero Trust with policy-based access controls to ensure that security policies can be applied independently of the underlying network configuration. This technology is currently in early access.

My biggest takeaway from CloudWorld 2024 is that Oracle is not just delivering a long list of new features for its customers—all of these developments are incorporated into the company’s entire product and service portfolio. In the end, Oracle is not afraid of eating its own dog food while customers gain value from Oracle’s integrated, full-stack approach.


Ocean Protocol

DF107 Completes and DF108 Launches

Predictoor DF107 rewards available. DF108 runs Sept 19 — Sept 26, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 107 (DF107) has completed.

DF108 is live today, Sept 19. It concludes on September 26. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF108 consists solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:
- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs.
- To claim ROSE rewards: see instructions in the Predictoor DF user guide in the Ocean docs.

4. Specific Parameters for DF108

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to distribute these rewards evenly. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF107 Completes and DF108 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Wednesday, 18. September 2024

Microsoft Entra (Azure AD) Blog

Microsoft Entra Internet Access now generally available

With the rise of hybrid work, identity and network security professionals are now at the forefront of protecting their organizations. Traditional network security tools fall short in meeting the integration, complexity, and scale requirements of anywhere access, leaving organizations exposed to security risks and poor user experiences. To address this, network security and identity must function as a unified force in defense. Only when identity and network controls deeply integrate into secure access, can we fully deliver on the core Zero Trust principles, where trust is never implicit and access is granted on a need-to-know and least-privileged basis across all users, devices, and applications.

Microsoft Entra Internet Access

On July 11th, 2024, we announced general availability (GA) of Microsoft Entra Suite, which includes Microsoft Entra Internet Access, part of the Security Service Edge (SSE) solution. Internet Access secures access to all internet and SaaS applications and resources with an identity-centric secure web gateway (SWG) solution, unifying identity and network access controls through a single Zero Trust policy engine to close security gaps and minimize the risk of cyberthreats. Our solution integrates seamlessly with Microsoft Entra ID, eliminating the need to manage users, groups, and apps in multiple locations. It protects users, devices, and resources with capabilities such as universal Conditional Access, context aware network security, and web content filtering, so you no longer need to manage multiple disconnected network security tools.

Figure 1: Secure access to all internet and SaaS applications and resources, with an identity-centric SWG.

Unified identity and network security

Our deep integration with Entra ID enables Conditional Access, and later continuous access evaluation (CAE), to be extended to any external destination, internet resource, and cloud application, even if they’re not integrated or federated with Entra ID. This integration with Conditional Access enables you to enforce granular controls, leveraging device, user, location, and risk conditions by applying network security policies tailored to the requirements of your enterprise. Additionally, Microsoft Entra Internet Access provides enhanced security capabilities, such as token replay protection and data exfiltration controls, for Entra ID federated applications.

Figure 2: Rich user, device, location, and risk awareness of Conditional Access for network security policy enforcement

Protect your users with context aware network security

With Microsoft Entra Internet Access you can now link your network security policies to Conditional Access, providing a versatile tool that can adapt to various scenarios for your SWG policy enforcement. With web category filtering, you can easily allow or block a vast range of internet destinations based on pre-populated web categories. For more granular control, you can use fully qualified domain name (FQDN) filtering to establish policies for specific endpoints or override general web category policies effortlessly.

 

For instance, you can create a policy that allows your finance team access to critical finance applications while restricting access for the rest of your organization. Furthermore, you can add risk-based filtering policies that dynamically adapt to a user’s risk level in Entra ID Protection, restricting access to these destinations for users whose risk is elevated and providing additional protection for your organization. Another example is granting just-in-time access to Dropbox while blocking all other external storage sites, leveraging the deep integration between Microsoft Entra Internet Access, Conditional Access, and Entra ID Governance workflows.
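
To make that policy composition concrete, here is a minimal, hypothetical sketch of how group-scoped web categories, FQDN overrides, and risk-based filtering could combine into a single allow/block decision. All names, fields, and structures below are illustrative assumptions, not the actual Microsoft Entra Internet Access API.

```python
# Hypothetical SWG policy evaluation sketch; illustrative only.
from dataclasses import dataclass, field

RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

@dataclass
class Request:
    user_groups: set      # e.g., {"finance-team"}
    fqdn: str             # e.g., "erp.example.com"
    web_category: str     # e.g., "Finance"
    user_risk: str        # "low" | "medium" | "high", from Entra ID Protection

@dataclass
class Policy:
    allowed_categories: dict = field(default_factory=dict)  # group -> {categories}
    fqdn_overrides: dict = field(default_factory=dict)      # fqdn -> "allow" | "block"
    block_at_risk: str = "high"

def evaluate(req: Request, pol: Policy) -> str:
    # Risk-based filtering: elevated user risk restricts access dynamically.
    if RISK_ORDER[req.user_risk] >= RISK_ORDER[pol.block_at_risk]:
        return "block"
    # FQDN rules override the general web-category policy.
    if req.fqdn in pol.fqdn_overrides:
        return pol.fqdn_overrides[req.fqdn]
    # Web-category access is scoped per group (e.g., finance team only).
    for group in req.user_groups:
        if req.web_category in pol.allowed_categories.get(group, set()):
            return "allow"
    return "block"

policy = Policy(allowed_categories={"finance-team": {"Finance"}},
                fqdn_overrides={"dropbox.com": "allow"})
print(evaluate(Request({"finance-team"}, "erp.example.com", "Finance", "low"), policy))  # allow
```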

 

In the coming months, we’ll be adding new capabilities such as TLS inspection and URL filtering to provide even more granular control for your web filtering policies. Plus, we’ll be adding Threat Intelligence (TI) filtering to prevent users from accessing known malicious internet destinations.

 

 

Provide defense in depth against token replay attacks with Compliant Network check

 

With the addition of the new Compliant Network control, you can prevent token replay attacks across the authentication plane by extending the Compliant Network check with Conditional Access to any Entra ID federated internet application, including Microsoft 365 applications. This feature also ensures that users cannot bypass the SSE security stack while accessing applications. The Compliant Network check eliminates the inherent disadvantages of source-IP-based location enforcement: cumbersome IP management and the hairpinning of remote users’ traffic through branch networks.
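
As a conceptual illustration (not Microsoft’s implementation), the Compliant Network check can be thought of as a condition on token issuance: the request must have arrived through the tenant’s own SSE edge, so a token replayed from outside that network is denied. Field names below are hypothetical.

```python
# Conceptual sketch of a compliant-network condition; illustrative only.
def is_compliant_network(source: dict, tenant_id: str) -> bool:
    # Traffic through the tenant's SSE edge carries verifiable tenant context,
    # unlike brittle source-IP allow lists.
    return bool(source.get("via_sse_edge")) and source.get("edge_tenant") == tenant_id

def authorize(request: dict, require_compliant_network: bool) -> str:
    if require_compliant_network and not is_compliant_network(
        request["network_source"], request["tenant_id"]
    ):
        return "deny"  # replay attempt from outside the compliant network
    return "allow"

replayed = {"network_source": {"via_sse_edge": False}, "tenant_id": "contoso"}
print(authorize(replayed, require_compliant_network=True))  # deny
```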

 

 

Protect against data exfiltration by enabling universal tenant restrictions (TRv2) controls

 

With Microsoft Entra Internet Access you can enable universal tenant restrictions controls across all managed devices and network branches, agnostic of OS and browser platform. Tenant Restrictions v2 is a strong data exfiltration control that lets you manage external access risks from your managed devices and networks by curating granular allow and deny lists of the foreign identities and applications that may be accessed.
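
A minimal sketch of the allow-list idea behind tenant restrictions, with hypothetical data shapes (not the actual TRv2 policy format): anything not explicitly allowed for a foreign tenant is denied.

```python
# Illustrative tenant-restrictions lookup; hypothetical structure only.
TRV2_ALLOW = {
    "fabrikam.example": {"SharePoint", "Teams"},  # partner tenant, limited apps
}

def trv2_permits(foreign_tenant: str, app: str) -> bool:
    # Default-deny limits data exfiltration paths from managed devices/networks.
    return app in TRV2_ALLOW.get(foreign_tenant, set())

print(trv2_permits("fabrikam.example", "Teams"))        # True
print(trv2_permits("unknown-tenant.example", "Teams"))  # False
```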

 

Figure 5: Universal tenant restrictions

 

Avoid obfuscating original user source IP

 

Traditional third-party SSE solutions hide the original source IP of users, only showing the proxy IP address, which degrades your Entra ID log fidelity and Conditional Access controls. Our solution proactively restores original end-user source IP context for Entra ID activity logs and risk assessment. It also maintains backward compatibility for source IP based location checks in your Conditional Access policies.

 

 

Deliver fast and consistent access at a global scale

 

Our globally distributed proxy, with multiple points of presence close to your users, eliminates extra hops to optimize traffic routing to the internet. You can connect remote workers and branch offices through our global secure edge that’s only milliseconds away from users. We have thousands of peering connections with internet providers and SaaS services, and for services like Microsoft 365 and Azure, you avoid the performance penalty of additional hops and improve the overall user experience by sending traffic directly to Microsoft’s WAN infrastructure.

 

Figure 7: Microsoft's global Wide Area Network (WAN)

 

Attain deep insights and network analytics using in-product dashboards

 

Our comprehensive in-product reports and dashboards are designed to be easy to digest, giving you a complete, holistic view of your organization’s entire ecosystem. You can monitor deployment status, identify emerging threats through comprehensive network and policy monitoring logs, and address problems quickly. Our dashboard delivers an overview of the users, devices, and destinations connected through Microsoft’s SSE solution. We show cross-tenant access within your enterprise, as well as the top network destinations in use and other policy analytics.

 

Figure 8: In-product dashboard

 

Microsoft Entra Internet Access architecture overview

 

Microsoft’s SSE architecture for client and branch connectivity streamlines network access and security. The Global Secure Access standalone client on the endpoint is currently available for Windows and Android; macOS and iOS are coming soon. Branch connectivity relies on site-to-site connections from network devices to Microsoft’s SSE edge services; Microsoft traffic is now available, with Internet Access traffic being added soon. Traffic from both client and branch connectivity models is secured and tunneled through Microsoft’s SSE edges. Additionally, we have partnered with HPE Aruba and Versa to integrate our SSE solution with their SD-WAN offerings, with additional SD-WAN partners coming soon.

 

Side-by-side interoperability with third-party SSE solutions

 

One of the unique advantages of Microsoft’s SSE solution is its built-in compatibility with third-party SSE solutions: you acquire only the traffic you need to send to Microsoft’s SSE edges. For example, you can enable the Microsoft traffic profile to manage Microsoft 365 and Entra ID traffic and optimize performance for your Microsoft applications while using other providers for the remaining traffic. Configuring traffic forwarding profiles is straightforward, allowing for precise control over internet and SaaS traffic, including Microsoft 365. Traffic profiles are also user aware and can be assigned to specific groups in your enterprise as appropriate.
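
To illustrate the side-by-side model, here is a hypothetical sketch of traffic-forwarding profile selection; the profile names and structures are invented for illustration and are not the product’s configuration format.

```python
# Hypothetical side-by-side routing: only acquired profiles go to Microsoft's
# SSE edge; everything else stays with the incumbent provider.
PROFILES = {
    "microsoft": {"enabled": True,  "groups": {"*"}},           # M365 / Entra ID traffic
    "internet":  {"enabled": False, "groups": {"pilot-users"}}, # remaining internet/SaaS
}

def next_hop(profile: str, user_groups: set) -> str:
    p = PROFILES.get(profile, {})
    in_scope = "*" in p.get("groups", set()) or bool(p.get("groups", set()) & user_groups)
    return "microsoft-sse-edge" if p.get("enabled") and in_scope else "third-party-sse"

print(next_hop("microsoft", {"sales"}))  # microsoft-sse-edge
print(next_hop("internet", {"sales"}))   # third-party-sse (profile disabled)
```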

 

Figure 9: Flexible deployment options

 

Conclusion

 

Microsoft Entra Internet Access offers a robust, identity-centric SWG solution that secures access to internet and SaaS applications. By unifying Conditional Access policies across identity, endpoint, and network, it ensures every access point is safeguarded, adapting to the needs of a hybrid workforce and mitigating sophisticated cyberattacks. This strategic shift not only enhances security but also optimizes user experience, demonstrating Microsoft's commitment to leading the transition to cloud-first environments.

 

Learn more and get started 

 

Stay tuned for more Microsoft Entra Internet Access blogs and for a deeper dive into Microsoft Entra Private Access. For more information, watch our recent Tech Accelerator product deep dives.

 

To get started, contact a Microsoft sales representative, begin a trial, and explore Microsoft Entra Internet Access and Microsoft Entra Private Access general availability. Share your feedback to help us make this solution even better. 

 

Anupma Sharma, Principal Group Product Manager

 

 

Read more on this topic

Simplify your Zero Trust strategy with the Microsoft Entra Suite and unified security operations platform, now generally available
Microsoft’s Security Service Edge products now in General Availability
Microsoft Entra Internet Access
Microsoft Entra Private Access

 

Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

Civic

Civic and Rentality Verify Drivers’ Licenses and Age Onchain, Bringing New Standard for Car Rental Security and Compliance

The Civic ID Verification Pass provides real-world benefits to users who can verify their identity and age quickly and rent a car directly from a car owner, without intermediaries SAN FRANCISCO, 18 SEPTEMBER: Civic, a leader in tokenized identity on the verifiable web, joins forces with Rentality, the first web3 car rental platform, to securely […]

The post Civic and Rentality Verify Drivers’ Licenses and Age Onchain, Bringing New Standard for Car Rental Security and Compliance appeared first on Civic Technologies, Inc..


Thales Group

Thales contributes to the production of seven additional sections of the SAMP/TNG for the French Air and Space Forces

© Thales

As announced on 17 September 2024 at the Conference on European Air and Missile Defense in Rome (Italy), Sébastien Lecornu, the French Minister of the Armed Forces, formalised a contract through OCCAR-EA to launch the serial production of seven additional SAMP/T NG sections for the French Air and Space Forces.

The SAMP/T NG system is developed through a cooperation between Italy and France. Natively designed to manage munitions of the Aster family (A30 B1 and A30 B1NT), it will offer a multilayer capability by integrating and coordinating SHORAD and V-SHORAD assets.

Each French SAMP/T NG section will be based on the Thales GF300 multifunction rotating active electronically scanned array radar and on the New Generation Engagement Module (ME-NG) produced by Thales.

The ME-NG is the core of the system. It is developed through a cooperation between Thales and MBDA Italy. The ME-NG is based on a common core hardware and software architecture able to integrate specific national requirements and different radars, to coordinate or to integrate various weapon systems based on diverse munitions.

This order is an additional step towards the renewal of the European Medium-Range Air Defence ground capabilities, following the SAMP/T NG development launched by the two Nations in 2021. The French Air and Space Forces will now benefit from eight SAMP/T NG.

“Thales is proud to contribute to the sovereignty of nations with its most innovative air defence technologies, including the New Generation Engagement Module (ME-NG) and the GF300 radar. This contract for seven additional SAMP/T NG sections for France is a significant step forward for European air defence. It further strengthens Thales’ role as a trusted partner of the French Air and Space Forces.” Hervé Dammann, Executive Vice-President, Land and Air Systems, Thales.

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.


Vulnerable APIs and Bot Attacks Costing Businesses up to $186 Billion Annually

API insecurity and automated abuse by bots responsible for up to 11.8% of cyber events and losses globally
Bot-related security incident count rose 88% in 2022 and 28% in 2023
Insecure APIs result in up to $12 billion more in losses than they did in 2021
@Thales

Imperva, a Thales company, the cybersecurity leader that protects critical applications, APIs, and data, anywhere at scale, has released the “Economic Impact of API and Bot Attacks” report. The analysis of more than 161,000 unique cybersecurity incidents investigates the rising global costs of vulnerable or insecure APIs and automated abuse by bots, two security threats that are increasingly interconnected and prevalent. The report estimates that API insecurity and bot attacks result in up to $186[1] billion in losses for businesses around the world.

The report is based on a study conducted by the Marsh McLennan Cyber Risk Intelligence Center which found that larger organizations were statistically more likely to have a higher percentage of security incidents that involved both insecure APIs and bot attacks. Enterprises with revenues of more than $1 billion were 2-3x more likely to experience automated API abuse by bots than small or mid-size businesses. The study suggests that large companies are particularly vulnerable to security risks associated with automated API abuse by bots because of complex and widespread API ecosystems that often contain exposed or insecure APIs.

Enterprises rely heavily on APIs to enable seamless communication between diverse applications and services. Data from Imperva Threat Research finds that the average enterprise managed 613 API endpoints in production last year. That number is growing rapidly as businesses face mounting pressure to deliver digital services with greater agility and efficiency.

Due to this increased reliance and their direct access to sensitive data, APIs have become attractive targets for bot operators. In 2023, automated threats accounted for 30% of all API attacks, according to data from Imperva Threat Research. Today, automated API abuse by bots costs organizations up to $17.9 billion of losses annually. As the number of APIs in production multiplies, cybercriminals will increasingly use automated bots to find and exploit API business logic, circumvent security measures, and exfiltrate sensitive data.

“It’s imperative that businesses across the world address the security risks posed by insecure APIs and bot attacks, or they face a substantial economic burden,” says Nanhi Singh, General Manager of Application Security at Imperva, a Thales company. “The interconnected nature of these threats necessitates that companies take a holistic approach, integrating comprehensive security strategies for both bot and API attacks.”

Some of the key trends identified in the report include:

Increased API adoption and usage is growing the attack surface: The rapid adoption of APIs, the inexperience of many API developers, and a lack of collaboration between security and development teams have led to insecure APIs now causing up to $87 billion in losses annually, a $12 billion increase from 2021.
Bots negatively impact organizations’ bottom line: The widespread availability of attack tools and generative AI models has enhanced bot evasion techniques and enabled even low-skilled attackers to launch sophisticated bot attacks. Up to $116 billion in losses annually can be attributed to automated attacks by bots.
API and bot-related security incidents are becoming more frequent: In 2022, API-related security incidents rose by 40%, and bot-related security incidents spiked by 88%. These increases were fueled by a rise in digital transactions, the expanding use of APIs, and geopolitical tensions such as the Russia-Ukraine conflict. In 2023, as digital traffic began to stabilize and the pandemic-driven surge in internet activity subsided, the frequency of these incidents moderated: API-related security incidents grew by 9%, while bot-related security incidents jumped by 28%. The overall upward trend highlights the growing persistence and frequency of these threats.
Insecure APIs and bot attacks pose a significant threat to large enterprises: Companies with revenue of at least $100 billion are most likely to suffer security incidents related to insecure APIs or bot attacks. These threats constitute up to 26% of all security incidents experienced by such businesses.
Countries around the globe are vulnerable to API and bot attacks: Brazil experienced the highest percentage of events related to insecure APIs or bot attacks, with these threats accounting for up to 32% of all observed security incidents. It was closely followed by France (up to 28%), Japan (up to 28%), and India (up to 26%). While the percentage of events attributed to API and bot-related security incidents was lower in the United States, 66% of all reported events related to vulnerable APIs or automated abuse by bots occurred within the country.

“Reliance on APIs will continue to grow exponentially, driving connections to generative AI applications and large language models,” adds Singh. “At the same time, generative AI will also empower cybercriminals to create sophisticated bots at an accelerated and alarming rate. As API ecosystems expand and bots become more advanced, organizations should anticipate a significant rise in the economic impact of automated API abuse by bots unless proactive measures are taken.”

Additional Information:

Download a copy of the “The Economic Impact of API and Bot Attacks” report for additional insights on the business impact of API and bot-related security incidents. See how Imperva Advanced Bot Protection and API Security can protect websites, applications, and APIs from automated attacks and without affecting the flow of business-critical traffic. Read the Imperva Blog for the latest product and solution news, and threat intelligence from Imperva Threat Research.

[1] The overall total does not double count events that are both API and bot related.
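
A quick arithmetic check (illustrative only, using the report’s rounded upper bounds) shows why the component figures do not sum to the combined total, and that the implied overlap is consistent with the roughly $17.9 billion attributed to automated API abuse by bots above.

```python
# Illustrative arithmetic: the component upper bounds overlap, so they don't
# sum to the combined figure (overlapping incidents are counted once).
api_losses_bn = 87    # up to $87B from insecure APIs
bot_losses_bn = 116   # up to $116B from bot attacks
combined_bn = 186     # combined upper bound, overlap counted once

overlap_bn = api_losses_bn + bot_losses_bn - combined_bn
print(f"Implied overlapping losses: up to ${overlap_bn}B")  # up to $17B
```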

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.


Tokeny Solutions

Tokeny’s Talent | Shurong

Shurong Li is Head of Marketing at Tokeny; she joined the company in 2018.

Reflecting on the 6-year Journey

You’ve been with the company for six years now, starting as an intern and now leading the department. How has the company supported your growth during this time?

Giving up is how we define failure. Thanks to this spirit, we’ve been given a safe and supportive environment to try, fail, and try again until we succeed. It feels as though each of us is an independent entrepreneur within a community of entrepreneurs. We embrace failures and we celebrate wins. I’m really thankful to be part of a team where I feel fully empowered and trusted at every stage, from my beginnings as a junior to now holding a leadership position. This drives me to push my limits and keep learning and growing.

Tokeny’s Culture Involvement

Tokeny has grown significantly since you joined. How has the company culture evolved, in your opinion?

In the early days, when we were just a team of 8, everything felt like an exciting experiment. We moved extremely fast, exchanged ideas freely, and every day was a new opportunity to innovate. Our culture thrived on flexibility and passion. As we grew to over 40, things began to shift. Every department grew bigger, and communication across teams became more complex.

“I’m impressed by how we’ve managed to keep our agility, responding quickly to new challenges while becoming more organized.”

We’ve introduced standardized processes and well-defined personal objectives and key desired results that have brought order without stifling creativity. Our culture, in my view, has only become stronger. We’ve managed to keep a safe and supportive environment where everyone can do their best work, knowing they have a clear goal to achieve and a process to follow. It’s this balance between agility and structure that makes me excited about our future.

The last point I’d like to emphasize is that our team is committed to both achieving results and maintaining work-life balance. When urgent matters arise, we tackle them swiftly and efficiently, ensuring that nothing is left unresolved. At the same time, if no urgent issues arise, we ensure that our team members can fully enjoy their holidays, recognizing that recharging is essential for sustained performance. Management genuinely puts people first, valuing rest as a way to prevent burnout and keep energy levels high. This balanced approach allows us to consistently deliver exceptional results while keeping the team motivated and at their best.

Leadership Style

Now that you’re the Head of Marketing, how would you describe your leadership style, and how do you ensure that the collaborative and supportive environment you first experienced continues to thrive under your guidance?

When I think about leadership, I often reflect on the lessons I’ve learned from those who have led me, especially our CEO, Luc. Luc is a true visionary, someone who inspires everyone around him. Watching him lead, I realized that being a great leader isn’t just about giving orders; it’s about being a supportive mentor who inspires others to reach their full potential. I would describe my leadership style the same way.

“Being a great leader isn’t just about giving orders, it’s about being a supportive mentor who inspires others to reach their full potential.”

I believe in the power of starting with why. Whenever I give guidance, I always begin by explaining why a task is important and why we should approach it a certain way. This approach helps my team understand the bigger picture and see how their work fits into our overall goals. And often, it sparks new ideas and better solutions, as team members feel empowered to contribute their perspectives.

At the heart of my leadership philosophy is a simple belief: it’s all about caring for people. I strive to create an environment where everyone feels safe, supported, and valued. To me, leadership is about being human and supporting people to achieve great things. After all, when people feel cared for and understood, they are more likely to bring their best selves to their work.

In the end, my goal is to inspire my team, just as Luc has inspired me, to believe in themselves and in the impact they can make. I think all of the leaders at Tokeny have a similar approach, and this is what drives Tokeny to achieve extraordinary things. By building a culture of trust, clarity, and shared purpose, we honor one of Tokeny’s core values: putting people first.

Company Values in Practice

You mentioned in your previous interview how much you appreciated the creativity and boldness the company encourages. Can you share an example of a project where you or your team took a bold approach, and how it was received?

Creating the non-profit ERC-3643 Association was one of the boldest steps we’ve taken, driven by our vision of unlocking open finance for everyone. At first, it seemed like we might have more to lose than to gain. By forming a non-profit association, we welcomed contributions from anyone. We knew it would create opportunities for others and more competition. However, we believed in a bigger picture.

Our mission is to break down the silos of finance, because that’s the only way we, as an industry, can achieve what we all want: an open and connected financial world. Forming a non-profit association was a bold decision we made to reach that goal, and it paid off.

Today, ERC-3643 is recognized by financial institutions and governments as a market standard. More than 75 members have joined the association, and numerous partnerships have been formed thanks to interoperability. It has even been awarded the “Best Initiative of the Year” by Deloitte. We will continue contributing to the industry through this association to accelerate the adoption of onchain finance.

Reflections and Future Outlook

If you could give advice to your younger self, just starting out at Tokeny, what would it be?

If I could give my younger self advice, it would be to start long-term content creation much earlier. I began focusing on it only four years ago, but writing is valuable for all professionals, not just those in marketing. It helps solidify knowledge, clarify thoughts, and deepen understanding. I’ve grown to enjoy it so much that it’s become a personal habit, where I write and reflect regularly. It’s not just about writing more, but about crafting concise and impactful communication. This practice has sharpened my thinking and helped me quickly dive into and understand any new topic.

As someone who has been with the company through significant milestones, where do you see Tokeny going in the next five years, and how do you envision your role evolving in that journey?

Over the past seven years, I’ve witnessed the market shift from having no institutional interest to now working closely with many of them. The transformation has been remarkable. We’re currently at the early adopter stage of the technology adoption curve of tokenization. The next five years will be the most exciting yet, as we expect massive adoption to really take off. Looking ahead, I see Tokeny playing a pivotal role in accelerating the adoption of onchain finance.

“I truly believe this is just the beginning of our journey to thrive.”

While the technology itself is not an issue when working with a provider like us, operational shifts are challenging because they change how the value chain works, requiring each stakeholder to adapt. As a tech provider, our goal is to make the integration and operation processes as seamless as possible, ensuring everything runs smoothly. However, for this to happen, everyone in the value chain needs to understand the benefits of tokenized assets, as well as the risks of not adopting them, so they don’t become blockers to adoption.

All it takes is market education. My role will continue to create educational and engaging content that people are genuinely interested in reading, watching, and sharing. By spreading knowledge, we can guide professionals to thrive and understand why they should embrace onchain finance and how they can succeed in this transition. Together, we can drive the future of finance, transforming the way the world transfers and manages value.


The post Tokeny’s Talent | Shurong appeared first on Tokeny.


Dock

$CHEQ $DOCK Token Merger Approved: An Alliance for Decentralized Identity Adoption

We are thrilled to announce that the token merger between cheqd and Dock has been officially approved by both $CHEQ and $DOCK holders. 

By harnessing the combined strengths of two industry pioneers, Dock and cheqd will accelerate the global adoption of decentralized identity and verifiable credentials, empowering individuals and organizations worldwide with secure and trusted digital identities.

Dock and cheqd will continue as independent companies serving distinct market sectors in unique ways. cheqd will continue to advance payment infrastructure and network-layer functionalities, while Dock will remain focused on the issuance, verification, and monetization of verifiable credentials for Identity Solution Providers, including KYC, background check, and biometrics companies, through their Certs platform. Read more about the alliance here.

With the approval of this token merger, $DOCK tokens will be swapped for $CHEQ tokens at a ratio of 18.5178 $DOCK to 1 $CHEQ. This is based on a 15-day historical average of the closing prices of both tokens. The migration is estimated to commence in the latter half of Q4. More details will be available soon.
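
For illustration, here is the swap arithmetic at the announced ratio; this is a simple worked example, not the migration mechanics themselves.

```python
# Worked example of the announced swap ratio (18.5178 $DOCK : 1 $CHEQ).
SWAP_RATIO = 18.5178  # DOCK per CHEQ, per the approved merger terms

def dock_to_cheq(dock_amount: float) -> float:
    """Return the $CHEQ received for a given $DOCK balance at the fixed ratio."""
    return dock_amount / SWAP_RATIO

print(f"{dock_to_cheq(10_000):,.2f} CHEQ for 10,000 DOCK")  # ~540.02 CHEQ
```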

Dock’s historical and future transactions will be migrated to the cheqd blockchain, guaranteeing continuity and providing enhanced functionality for all ongoing Dock operations.

Browse our FAQ to learn more about the alliance and token merger.

Majority Approval from cheqd and Dock Communities

The governance vote resulted in 100% approval from both $CHEQ holders and $DOCK holders.

This strong backing from both communities reflects the shared belief in the potential of this merger to unlock new opportunities for all parties involved and drive the future of decentralized identity.

What Does the Merger Mean for Dock and cheqd?

The two companies—cheqd and Dock—will remain independent legal entities, with projects and roadmaps remaining largely unchanged.

One of the most significant benefits of this collaboration is the increased interoperability it will provide. Dock will transition to a blockchain that is already being utilized by key players in the digital identity sector. By aligning ourselves with a widely adopted blockchain, we are positioning our solutions within a broader, interconnected ecosystem.

As a $DOCK token holder, this merger with $CHEQ brings a host of compelling benefits that enhance both the value and utility of your tokens, such as increased token liquidity, access to enhanced resources and tokenomics that benefit holders. Read all about the holder benefits.

Additionally, Dock’s migration of network traffic to cheqd will significantly boost activity on the cheqd network, bringing approximately 300% more traffic to the mainnet and 50% to the testnet. This will accelerate network effects, driving more adoption across industries and use cases.

This collaboration is set to increase demand for $CHEQ, as more identity transactions will occur across cheqd’s infrastructure, supporting a broader ecosystem of verifiable credentials and increasing token burn. The partnership of cheqd’s and Dock’s established ecosystems will forge a powerful network of over 100,000 community members and hundreds of active partners.

What Happens Next?

As we move forward, cheqd and Dock will announce the commencement dates for the following key activities:

Token Migration: The migration of $DOCK tokens to $CHEQ is expected to begin in the latter half of Q4.
Porting Blockchain Transactions: Existing blockchain transactions on the Dock chain will be ported to the cheqd blockchain.

The cheqd and Dock teams will work closely with exchanges to facilitate the token migration, ensuring a seamless transition for all trading activities.

Post-migration, Dock will default to using the cheqd network, though we will still support clients who request to use an alternative chain, multiple blockchains, or ledgerless identity systems. We believe defaulting to the cheqd chain will ensure that Dock continues to operate within the most advanced and secure decentralized ID ecosystem.


A Defining Moment for the Decentralised Identity Market

By merging the $DOCK token with $CHEQ, we are unlocking unprecedented opportunities for our community, positioning you at the cutting edge of decentralized identity innovation.

The future of decentralized digital identity is bright, and with your $CHEQ tokens, you'll be part of a dynamic, growing ecosystem that is set to lead the industry. 

Dock and cheqd will shape a world where secure, verifiable credentials are the norm, and your involvement is key to making this vision a reality. The journey ahead is filled with potential, and we are thrilled to have you with us as we pave the way for the next era of digital identity.


PingTalk

Best Buy Boosts Employee, Vendor, Contractor Efficiency, Experience

Best Buy enhances efficiency and security for employees, vendors, and contractors with Ping Identity's IAM solutions. Learn how in this detailed customer case study.

 

Walking into a Best Buy is a consumer electronics dream. Upon entering, you see the familiar and welcoming Best Buy “blue shirts” and know your electronic goals will be met. In the store, you will also see other shirts with logos of Best Buy partners like Apple, Microsoft, Samsung, and more, who collaborate with the company to help customers meet their varying technological needs. It’s truly a pretty awesome one-stop-shop experience, but did you ever stop to think about the complexity of the systems that allow these groups to work together in a shared space? For example, a Microsoft employee will probably not want to use an iPad, and an Apple employee should not be able to see Microsoft customer information and sales data. There are countless complexities with all of these vendors operating together in the same store. And all of these complexities are occurring in more than 1,100 locations globally.

 

Fortune 100 consumer electronics retailer Best Buy has not only nailed these very complex and numerous use cases, but it has done so with astounding efficiency. I recently had the pleasure of chatting with Greg Handrick, Director of Identity and Access Management (IAM) and Cryptography, and Vinodh Rajagopalan, Associate Engineering Director of IAM, and they explained how identity is driving efficiencies and secure yet pleasant user experiences for their employees, vendors, and more.

 

Greg set the stage by explaining, “IAM is 100% centralized at Best Buy. Our team has global responsibility for all enterprise identities, which includes all employees, contractors, non-human accounts, bot accounts and vendors. We have a total of 180,000 identities under management.”

 

Best Buy began its journey with Ping in 2009, using PingFederate with a very niche use case. By 2020, Best Buy was experiencing issues with its IAM infrastructure, which consisted of Oracle Access Manager, Microsoft ADFS, SecureAuth and some homegrown solutions, all running on-premises. Vinodh explained, “Things were too complex. We didn’t have great support from our existing vendors, and we also began finding some bugs. But what was really important was our increasing need for flexibility and the ability to customize certain solutions.”


Thales Group

Thales Australia and Underwood Innovation Labs sign an MoU to establish a collaborative Advanced Air Mobility (AAM) Centre of Excellence in Queensland, Australia

Thales and Underwood Innovation Labs, the inaugural Australian government-backed innovation lab, signed a Memorandum of Understanding (MoU) to establish an Advanced Air Mobility Centre of Excellence (AAM COE). Located in Queensland, Australia, the AAM COE will facilitate the growth of a scalable and collaborative UAV ecosystem in Advanced Air Mobility, create high-skilled jobs, and provide access to indoor, virtual and physical airspace for the safe design and testing of Remotely Piloted Aircraft Systems (RPAS). The establishment of the AAM COE aligns with the Australian Government’s priorities identified in the Aviation White Paper regarding the Advanced Air Mobility (AAM) sector.
©Thales

Thales and Underwood Innovation Labs signed a Memorandum of Understanding (MoU) to establish an AAM Centre of Excellence. The AAM COE, supported by the Mayor of Logan City, Hon Jon Raven, will operate as a membership-based, open ecosystem, enabling organisations to access and utilise state-of-the-art innovation, technology and resources.

The location of the AAM COE, in South East Queensland, is one of Australia’s fastest-growing regions, with the population expected to reach 5.4 million by 2041. As a key economic hub, the establishment of a centre of excellence will cultivate advanced technology and develop skills for Queensland’s future workforce.

The AAM COE is modelled after a successful initiative in Paris, France, known as Centre d’Excellence Drones Ile De France (CEDIF). CEDIF operates with an approved 40km Beyond Visual Line of Sight (BVLOS) airspace corridor extending from Saint-Quentin en Yvelines to Bretigny sur Orge. Supported by Thales, Eurocontrol, and Systematic, CEDIF aims to provide a comprehensive platform for incubating, validating, and industrializing all aspects of drone activities, both direct and indirect.

 

"Thales is thrilled to be the initial founding partner in establishing the forthcoming innovation ecosystem centred on a Centre of Excellence for AAM in Queensland, alongside Underwood Innovation Lab and the City of Logan. Our shared commitment to trust, innovation, and results will unite innovators in addressing everyday challenges, integrating drones and other advanced air mobility systems safely into our daily routines, and contributing to the decarbonization of the future aviation industry." - Bobby Pavlickovski, Head of Uncrewed Services, Thales Australia,.

“Underwood Innovation Lab is delighted to be partnering with Thales Australia to establish and deliver this catalytic project for Queensland, which will propel the Advanced Air Mobility sector in the State and ultimately nationally. As a first-of-its-kind, local-government-backed innovation lab, this project aligns well with the UiLab mission to positively impact the Australian innovation ecosystem through strategic global partnerships and transformative projects such as this, which will create high-value jobs, attract further investment, and ultimately improve national productivity.” - Dr Paul Mathiesen (UiLab Chief Innovation Officer).

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.


BlueSky

Bluesky’s Current Efforts on Trust and Safety

In August, we published a blog post on anti-toxicity features that Bluesky’s product team designed with the Trust & Safety team. You can read that blog post here.

Trust and Safety (T&S) encompasses how we make all aspects of the Bluesky app a safe and enjoyable experience for users, covering the processes, policies, and the product. As Bluesky’s Head of T&S, my goal is to understand where the biggest gaps in user needs are and how to address them to ensure that people have a pleasant experience on Bluesky.

This is a big quarter for Trust and Safety at Bluesky, as we work on a large number of improvements. Here’s a preview of everything that is in progress!

Ban evasion and multi-account detection capabilities

People deserve to have an experience free from harassment on Bluesky. While harassers can be infinitely creative in how they avoid detection, we’re working on tooling to reduce their impact. For example, we’re adding more friction to their ability to create new accounts. We currently register users for additional defenses when we see a pattern of new account harassment, but in the future, we'll be able to better detect and surface when multiple new malicious accounts are created and managed by the same user.

Toxicity detection experiments

Addressing toxicity is one of the biggest challenges on social media. On Bluesky, the two categories that made up 50% of user reports in the past quarter were rude content and accounts that are fake, scams, or spam. Rude content especially can drive people away from forming connections, posting, or engaging for fear of attacks and dogpiles.

In our first experiment, we are attempting to detect toxicity in replies, since user reports indicate that is where people experience the most harm. We’ll detect rude replies, surface them to moderators, and eventually reduce their visibility in the app. Repeated rude labels on content will lead to account-level labels and suspensions. This will be a building block for detecting group harassment and dog-piling of accounts.
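
As a rough sketch of how such an escalation ladder could work (the thresholds here are hypothetical, not Bluesky’s actual values):

```python
# Illustrative escalation: rude replies are labeled and surfaced to moderators;
# repeated rude labels escalate to account-level labels and suspension.
ACCOUNT_LABEL_AT = 3
SUSPEND_AT = 6

def on_rude_reply(account: dict) -> list[str]:
    account["rude_labels"] = account.get("rude_labels", 0) + 1
    actions = ["surface_to_moderators", "reduce_reply_visibility"]
    if account["rude_labels"] >= SUSPEND_AT:
        actions.append("suspend_account")
    elif account["rude_labels"] >= ACCOUNT_LABEL_AT:
        actions.append("apply_account_level_label")
    return actions

acct = {}
for _ in range(3):
    last = on_rude_reply(acct)
print(last)  # includes 'apply_account_level_label' on the third rude reply
```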

Automating spam and fake account removals

Harm on social media can happen quickly. For example, if a fake impersonation account asks for a fund transfer, it might take only a matter of minutes before someone falls for a scam. We’re launching a pilot project to automatically detect when an account is clearly fake, scamming, or spamming users to hopefully reduce the likelihood this happens. We’re hoping that this project, paired with our moderation team, can cut down the action time for these reports to within seconds of receiving a report.

Feedback on moderation reports

In the coming months, we’re working to move away from communicating with users about violations via email and toward communicating through the Bluesky app. Users will receive notices of infractions or labels within the app. We’ll also send the outcomes of your own reports through the app.

Geography-specific labels

In some cases, content or accounts may be allowed under Bluesky's Community Guidelines but violate local laws in certain countries. To balance freedom of speech with legal compliance, we are introducing geography-specific labels. When we receive a valid legal request from a court or government to remove content, we may limit access to that content for users in that area. This allows Bluesky's moderation service to maintain flexibility in creating a space for free expression, while also ensuring legal compliance so that Bluesky may continue to operate as a service in those geographies. This feature will be introduced on a country-by-country basis, and we will aim to inform users about the source of legal requests whenever legally possible.
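
A minimal sketch of how such geography-scoped gating could work, with hypothetical data shapes (not Bluesky’s implementation): content stays up globally, but access is limited for viewers in a jurisdiction covered by a valid legal request.

```python
# Illustrative geography-specific labeling; data shapes are hypothetical.
def visible(content: dict, viewer_country: str) -> bool:
    # geo_labels maps country code -> reason (e.g., a reference to the legal request)
    return viewer_country not in content.get("geo_labels", {})

post = {"text": "...", "geo_labels": {"DE": "legal request (illustrative)"}}
print(visible(post, "DE"))  # False: hidden for users in that country
print(visible(post, "US"))  # True: visible elsewhere
```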

Designing video on Bluesky for safety

We recently launched video on Bluesky, and the T&S team has been working with the product team to ensure the feature is launched safely.

Here’s a look at how T&S works together with product. The product team puts together a document listing what they intend to build. Trust and Safety then assesses the risks associated with the feature, and makes recommendations to minimize harms that are most likely from that feature. This ensures that we anticipate harms and integrate mitigations before launch.

For video, Trust & Safety has incorporated various features like being able to turn off auto-play or ensuring that reports can be made and labels applied to content. You can read more about the available safety tooling for video here.

We try to be pragmatic in building the safety elements that most people will need prior to launch, but there’s always room for more improvements in response to user feedback. So after a product launches, we pay close attention to reports and support requests as we improve the feature.

List changes to restrict abuse

Lists are a powerful way to have more control over your experience on Bluesky. You’re able to curate your favorite users, or to filter individuals out from your Bluesky experience — and to share those lists with others, so they can benefit from your curation as well.

However, sometimes bad actors use lists to harass others and violate our rules, so we’re making some changes. We recently updated starter packs to remove members when blocked, and we are doing the same for curated lists. Prior to this, the Bluesky Trust & Safety team was only able to take down entire lists as a moderation action, rather than removing specific individuals. For moderation lists, this would mean that we’d unintentionally erase blocks. Now, when you block the creator of a list that you are on, you will be removed from the list. This behavior doesn’t apply to moderation lists, since that would defeat their purpose.
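
A minimal sketch of the rule described above, with hypothetical data shapes: blocking a list’s creator removes you from their starter packs and curated lists, but leaves moderation lists untouched.

```python
# Sketch of block-driven list removal; moderation lists are exempt because
# removing blocked users from them would defeat their purpose.
def on_block(blocker: str, blocked_creator: str, lists: list[dict]) -> None:
    for lst in lists:
        if (
            lst["creator"] == blocked_creator
            and lst["purpose"] in {"curated", "starter_pack"}
            and blocker in lst["members"]
        ):
            lst["members"].remove(blocker)

lists = [
    {"creator": "carol", "purpose": "curated",    "members": ["alice", "bob"]},
    {"creator": "carol", "purpose": "moderation", "members": ["alice"]},
]
on_block("alice", "carol", lists)
print([l["members"] for l in lists])  # [['bob'], ['alice']]
```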

We will also be starting a widespread effort to identify lists with toxic and abusive names or descriptions. Lists with names or descriptions that violate the Bluesky Community Guidelines will be hidden in the app until or unless their creator modifies them to comply with our rules. We will also take further action against users that repeatedly create abusive lists.

Lists continue to be an area of active discussion and development for our team to find the right balance for user safety.

Prioritizing User Concerns

This section provides some transparency on how we prioritize T&S efforts across the organization.

We read your concerns raised via reports, emails, or mentions to @safety.bsky.app. Our overall framework asks how often something happens versus how harmful it is. We then focus on addressing high-harm/high-frequency issues while also tracking edge cases that could result in serious harm to a few users.

For example, a small number of accounts have been harassing a few people on the app by creating multiple accounts and targeting them repeatedly. Although this happens to a tiny fraction of users, it causes enough continual harm that we want to take action to prevent this abuse.
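
As an illustration of that frequency-versus-harm framework, here is a toy triage function; the scales and the floor for severe edge cases are hypothetical.

```python
# Illustrative frequency-vs-harm triage; weights and floor are invented.
def priority(frequency: float, harm: float) -> float:
    """frequency and harm on a 0-1 scale."""
    score = frequency * harm
    if harm >= 0.9:
        # Rare but severe edge cases (e.g., targeted multi-account harassment)
        # stay on the radar regardless of how infrequent they are.
        score = max(score, 0.5)
    return score

print(priority(frequency=0.8, harm=0.7))    # ~0.56 -> high-frequency/high-harm work
print(priority(frequency=0.01, harm=0.95))  # 0.5   -> tracked despite rarity
```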

As always, your feedback is welcome through comments or by reaching out to moderation@blueskyweb.xyz.

Tuesday, 17. September 2024

Safle Wallet

Safle Community Explorer Carnival: Your Epic Adventure Begins!

Ready to explore the future of Web3? The Safle Community Explorer Carnival is launching soon, bringing you an exciting series of challenges designed to unlock the full potential of Safle Wallet and Safle Lens. Each challenge takes you deeper into the Web3 universe, where you’ll explore new chains, discover groundbreaking dApps, and level up with valuable XP! 🌌

Compete to climb the leaderboard and earn from a massive rewards pool in Safle Tokens! Don’t miss your chance to be a top explorer and shape the future of Web3!

Here’s a sneak peek at the action-packed quests coming your way:

🚀 Ignite the Safle Hype: The Saflenaut Journey Begins!

Get your engines roaring because the carnival is just around the corner — and guess what? YOU are the spark to ignite the buzz! Ready to suit up and blast off into the Web3 cosmos?

Think you’ve got your GAME ON? Welcome to the Saflenaut Mission — where your Web3 universe takes off. The more you rally, the bigger the adventure!

💥 Rootstock Troop

Gear up for an explosive mission on the Rootstock chain! Navigate, explore, and interact with dApps in a whole new way as you unlock the power of Safle Wallet’s latest integration. Adventure awaits those brave enough to take the plunge.

🚀 The BEVM Rocket

Strap in for a rocket-fueled journey to the BEVM chain! This isn’t just any mission — it’s your chance to discover how Safle Wallet takes cross-chain functionality to the next level. Ready to fire up those engines?

🏔 Avalanche Explorer

Prepare to conquer the Avalanche! Scale new heights and unlock powerful rewards as you interact with dApps in Safle Wallet. Are you ready to make your mark in the Avalanche ecosystem?

🔮 Polygon zkEVM Pioneer

The future of Web3 scalability is here, and YOU can be one of the first to explore it! Enter the Polygon zkEVM frontier and uncover the cutting-edge technology Safle has seamlessly integrated. Your pioneering spirit is about to be rewarded!

🌠 Base Voyager

Ever wanted to be a true explorer of the Base chain? Now’s your chance! Mint NFTs, engage in games, and experience the magic of Web3 on an entirely new level — all from the comfort of your Safle Wallet.

👁️ The Safle Lens Explorer

Prepare to see your portfolio like never before with Safle Lens! Whether it’s detecting spam tokens or NFTs, interacting with our AI, or uncovering hidden gems, this quest will open your eyes to Safle’s most exciting features yet.

🏆 And There’s More!

Complete multiple quests, level up with multipliers, and claim your share of an airdrop worth 15k USD in USDT, Safle Tokens & RBTC! As you journey through the Carnival, the rewards will keep stacking up. The more you play, the bigger your prize!

This is no ordinary quest — it’s an epic adventure. Mark your calendars, gather your crew, and get ready to level up in the Safle universe. The Safle Community Explorer Carnival is about to go live… will you rise to the challenge?

Keep a lookout 👉🏻 Follow Safle

Join the community 👉🏻 Join Discord


auth0

Auth0 Forms Is Now Generally Available!

We're excited to announce the general availability of Auth0 Forms, a powerful visual editor that empowers you to create custom, dynamic forms that integrate seamlessly with your authentication flows.

Indicio

Choosing the right deployment for decentralized identity: Why Indicio offers SaaS as well as on-premise options

The post Choosing the right deployment for decentralized identity: Why Indicio offers SaaS as well as on-premise options appeared first on Indicio.

By Ken Ebert

As more decentralized identity and verifiable credential solutions come to market, many vendors offer only Software-as-a-Service (SaaS) because of its ease of use and scalability. However, when it comes to managing verifiable credentials containing personal data, businesses, and especially governments, need to carefully assess where the platforms or software they depend on are hosted. In this blog, we’ll talk about how our platform for decentralized identity, Indicio Proven, supports requirements for data locality, compliance with regional regulations, and the security of personal data.

Assessment of data locality and regulatory compliance

Data residency is a key consideration when using a SaaS solution for verifiable credentials. A SaaS model for deployment may store or process data in multiple regions globally. While vendors often offer region-specific hosting, there are still challenges to ensuring that personal data is only processed in authorized geographic locations. This issue becomes even more pressing for government agencies and sectors dealing with sensitive citizen information, where the stakes for compliance are higher.

Governments around the world are beginning to operate under strict data sovereignty laws that dictate where personal data can be processed and stored. Regulations like the General Data Protection Regulation (GDPR) in the European Union, Australia’s Privacy Act, and Canada’s PIPEDA create stringent requirements for how personal data is handled, especially when it comes to cross-border data flows.

For organizations in Europe, the eIDAS (Electronic Identification, Authentication and Trust Services) regulation is the framework shaping the future of digital identity. Compliance with eIDAS and other regional regulations requires careful attention to where and how sensitive data is processed and stored. 

For many organizations, the risks associated with using a SaaS model hosted in a foreign jurisdiction may outweigh the benefits, particularly if the service provider cannot guarantee that data will remain within the required geographical boundaries.

On-premise deployment: The case for control

For businesses and governments that require the strictest control over data processing, an on-premise deployment offers a secure alternative. This model allows organizations to manage verifiable credential platforms and solutions within their own environment, ensuring that sensitive personal data never leaves their infrastructure. In an on-premise deployment, verifiable credentials and the underlying issuance and verification infrastructure are fully managed, controlled, and protected by the organization, minimizing the risks of external breaches or compliance failures.

On-premise deployments are particularly appealing to financial services and healthcare, where stringent data protection regulations demand maximum control over personal data. 

Indicio’s Differentiator: Offering Both SaaS and On-Premises Solutions

Despite the clear advantages of on-premise deployment for critical data applications, few vendors offer it as an option. This is where Indicio stands out as a solution provider, with both SaaS and on-premises deployment options that let businesses and governments meet their unique operational, privacy, and regulatory needs.

For those organizations that need the convenience and scalability of a cloud-based solution, Indicio Proven can be used as a fully-managed service. We handle the operational complexity of running the decentralized identity infrastructure, including regular maintenance, security updates, and compliance with global data protection regulations. This allows our clients to focus on their core operations while knowing that their verifiable credential solution is secure and up to date.

For organizations with stricter data-control requirements, Indicio Proven can be deployed on-premise to ensure that the personal data in verifiable credentials is never processed or stored outside their control.

The benefits of Indicio’s flexible deployment approach

By offering both SaaS and on-premises deployment options, Indicio provides organizations with the flexibility to choose the model that works best for them. Here are the key benefits of working with Indicio:

1. Tailored to Your Needs: Whether your organization prioritizes the ease and scalability of SaaS or requires the security and control of on-premises, Indicio has a solution that fits. We understand that no two organizations are the same, and our dual deployment model ensures that you don’t have to compromise on security or convenience.

2. Operational Excellence: For our SaaS customers, Indicio takes on the full responsibility of managing the infrastructure for issuing and verifying credentials. We handle maintenance, upgrades, and security patches, ensuring that your system runs smoothly and securely at all times. Our superb customer service ensures that you receive the support you need when you need it.

3. On-premise control: For organizations that require more control, Indicio’s on-premises option allows them to manage their Indicio Proven instance within their own environments. This deployment gives businesses and governments the ability to safeguard data, maintain compliance, and reduce risks associated with external data handling.

4. Regulatory compliance: Whether SaaS or on-premise, Indicio’s solutions are built with compliance in mind. We ensure that our systems meet the highest standards of security and data protection, giving you confidence that your decentralized identity solution will align with regulations like eIDAS, GDPR, and other regional frameworks.

Conclusion

As decentralized identity and verifiable credentials continue to shape the future of secure online interactions, businesses and governments must carefully evaluate their deployment options. SaaS models offer scalability and ease, but for organizations with stringent data control requirements, an on-premises deployment may be the best choice.

Indicio’s unique ability to provide both SaaS and on-premises solutions sets us apart in the market. Whether you need the operational simplicity of a managed SaaS environment or the control of an on-premises deployment, Indicio offers a flexible solution tailored to your needs, ensuring the security, compliance, and reliability of your decentralized identity infrastructure.

In an evolving regulatory landscape, Indicio is here to help you navigate the complexities of decentralized identity—offering superb customer service, operational excellence, and the flexibility to choose the deployment model that works best for you.

Contact us to learn more about how Indicio can support your verifiable credential deployment needs. 

###

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post Choosing the right deployment for decentralized identity: Why Indicio offers SaaS as well as on-premise options appeared first on Indicio.


This week in identity

E57 - Back to School 2024 Episode


Summary

In this episode of the Week in Identity podcast, Simon and David discuss the latest trends and developments in identity security, including market activity, funding rounds, and significant acquisitions. They delve into the importance of NIST guidelines, the rise of non-human identity (NHI), and the implications of recent acquisitions by MasterCard and Salesforce. The conversation highlights the evolving landscape of identity management and the critical need for organizations to adapt to new challenges in cybersecurity.


Chapters

00:00 Introduction to the Week in Identity Podcast

03:52 NIST Guidelines and Identity Assurance

06:30 Aembit Funding Rounds and Non-Human Identity

13:42 Acquisitions in Identity: IndyKite and 3Edges

20:17 MasterCard and Recorded Future

26:39 Salesforce and Own Data







KuppingerCole

Building Resilient IAM Systems: The Limits of IGA Customization


by Martin Kuppinger

Customizing Identity Governance & Administration (IGA) within Identity & Access Management (IAM) is a common practice, but how much is too much? This question becomes more pertinent as organizations increasingly seek to adapt COTS (Commercial Off-The-Shelf) and IDaaS (Identity-as-a-Service) solutions to their specific needs. The tendency to “over-customize” remains prevalent, even as IDaaS solutions evolve. So, let us explore when customization makes sense and, more importantly, how to avoid the pitfalls that come with excessive modification.

Customization vs. Configuration: Let’s Clarify 

First, let’s clarify what we mean by “customization.” Customization involves writing new code—whether through traditional coding, low-code, or no-code platforms. Configuration, on the other hand, refers to adjusting settings within the system, ideally through the user interface or, if necessary, via configuration files. While low-code/no-code approaches have gained popularity, they don’t entirely mitigate the risks associated with customization, especially without proper documentation, version control, and staging environments in place. 

Why Customize IGA Solutions at All? 

The first and most important questions to ask are: Do we need customization in IGA solutions, and to what extent? These are two separate questions. Based on my experience, the amount of customization typically required is far less than many organizations assume. 

Most IAM processes, including the management of Joiner, Mover, Leaver (JML) activities, can be standardized. Yes, there are variations and organization-specific requirements, but these are often at the detail level: How many approvers are required? Should approvals be sequential or parallel? Even these specifics can often be addressed using best practices. Several vendors provide process frameworks, or you can consult experts for tailored frameworks that align with your organization’s needs. 

At the core, every organization needs to onboard employees, manage their access, handle job transitions, and de-provision access when necessary. These are universal requirements, and best practices can address them efficiently. Yet, many organizations still customize excessively, resulting in unnecessary complexity and cost. 

The Real Reasons for Customization 

There are several reasons organizations end up with highly customized IGA solutions: 

- Legacy Processes: Many organizations are reluctant to let go of legacy processes, opting to map outdated workflows onto new systems. Worse, when organizations have multiple sites with their own “ways of doing things,” customization often spirals out of control.
- Lack of Standard Frameworks: While process frameworks exist, not enough vendors offer them out-of-the-box, forcing organizations to build their own—often from scratch.
- System Integrators: Cynics might argue that system integrators benefit from customization projects. However, this overlooks the downsides: dissatisfied customers, extended project timelines, and increased risk.

Does Switching Tools Solve the Problem?

Many organizations, when faced with a failing IAM (IGA) system, rush to replace the tool. While a tool change might seem like the solution, it rarely is. The problem usually lies in the approach to customization rather than in the tool itself. Even IDaaS, which inherently supports less customization, only mitigates the issue to a certain extent. 

A well-functioning IGA system doesn’t begin with the tool. It begins with clearly defined policies, processes, and organizational requirements. In projects that suffer from over-customization, the underlying issue is often the absence of well-documented processes. Without this groundwork, simply switching tools won’t help. 

Customization: When and How 

I’m not suggesting that customization is entirely unnecessary. There will always be specific needs that require customization. The key is to minimize unnecessary modifications and do it the right way when needed. 

- Rethink Processes: Before diving into customization, take a step back and critically evaluate your processes. Do you really need that custom approval workflow, or is there a best practice you can adopt?
- Avoid Backend Coding: A frequent source of trouble in IGA projects arises from coding directly against the backend, such as databases. If the database structure changes in a software update, the custom code breaks. Instead, work through APIs or create an abstraction layer to keep customizations stable.
- Segregate Custom Code: Modern IGA solutions provide extensive API support and container-based deployments. Custom code should reside in microservices, consuming the APIs of your IGA system. This ensures that updates to the core system don’t break your custom code. Even if the API changes, the impact is isolated to the specific microservice, minimizing disruptions.

Three Steps to Successful IAM (IGA) Customization

To ensure your IGA solution withstands necessary customization without failing, follow these steps: 

1. Define Policies and Processes First: Ensure your processes are thoroughly documented and follow best practices before even considering customization.
2. Minimize Unnecessary Customization: Many customizations provide little real benefit. Focus on what truly adds value to your organization.
3. Follow Best Practices in Coding: Build customizations on the Identity API layer of your Identity Fabric, isolate them in microservices, and ensure proper documentation and versioning (see the sketch below).
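
To make the third step concrete, here is a minimal sketch of custom approval logic isolated in its own microservice, assuming a hypothetical IGA system that exposes a REST API. The base URL, endpoints, and JSON field names are illustrative, not any specific vendor’s interface.

```python
# Minimal sketch: custom approval routing isolated in its own microservice,
# talking to the IGA system only through its published API, never its database.
# IGA_API, the endpoints, and the JSON field names are hypothetical.
import requests

IGA_API = "https://iga.example.com/api/v1"  # the Identity API layer

def approvers_for(access_request: dict) -> list[str]:
    """Custom rule: high-risk entitlements need two sequential approvers."""
    if access_request.get("risk_level") == "high":
        return [access_request["line_manager"], access_request["app_owner"]]
    return [access_request["line_manager"]]

def route_access_request(request_id: str, token: str) -> None:
    headers = {"Authorization": f"Bearer {token}"}
    # Read the request through the API...
    resp = requests.get(f"{IGA_API}/access-requests/{request_id}",
                        headers=headers, timeout=10)
    resp.raise_for_status()
    # ...and write the routing decision back through the same API.
    resp = requests.post(f"{IGA_API}/access-requests/{request_id}/route",
                         json={"approvers": approvers_for(resp.json())},
                         headers=headers, timeout=10)
    resp.raise_for_status()
```

If a product update changes the API, only this one microservice needs adapting; nothing in the core system or in other customizations breaks.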

By following these guidelines, you can deliver an IGA solution that meets your organization’s needs while avoiding the risks and costs of over-customization.


Northern Block

Why Northern Block is Joining the Global Acceptance Network

Northern Block joins the Global Acceptance Network to solve governance challenges and build trust across digital ecosystems.

At Northern Block, we are thrilled to announce our participation as a founding member in the newly established Global Acceptance Network (GAN). This initiative is a crucial step towards solving one of the biggest challenges we face in the digital world: the lack of trust in digital interactions.

Think about how seamlessly payments work in the physical world. When you see a Visa logo at a merchant’s point of sale, you immediately know that your Visa card will be accepted. You don’t hesitate to tap your card on the terminal. Unfortunately, we don’t yet have the same level of confidence when it comes to online interactions.

Today’s digital interactions, especially those involving sensitive information like login credentials or payment details, are often fraught with spam, abuse, and fraud. We frequently find ourselves unsure if the transactions we’re engaging in are legitimate. Whether it’s receiving out-of-band communications through SMS or email from organisations claiming to need something urgent from us—often playing on our emotions to compromise our security—we face constant uncertainty. On the other hand, organisations are striving to put their customers at the centre by creating more personalised and seamless experiences, and there’s no better way to achieve this than by obtaining data directly from the source: their customers. However, they need to trust that the data provided has integrity. Without this trust, businesses are forced to implement duplicate verification processes for all their customers, adding friction to the experience and undermining digital transformation efforts.

At Northern Block, we recognized this trust gap early on, which is why we became a founding member of the Trust over IP Foundation in 2020. Our goal wasn’t just to build better technologies but to apply the governance frameworks necessary to solve human trust problems in the digital world. While we’ve made great strides in achieving cryptographic trust—this only solves part of the problem.

Over the past few years, the Trust over IP Foundation has produced significant thought leadership and numerous deliverables, contributing greatly to the evolution of digital trust. Among these achievements, two major innovations stand out as particularly relevant to the Global Acceptance Network:

- The Trust Registry Query Protocol: This allows any entity to interact with a trust registry by asking a simple question: “Does Entity X have Authorization Y, in the context of Ecosystem Governance Framework Z?” (A rough illustration follows below.)
- The Governance Framework Metamodel and toolkit: These tools help capture and implement governance for ecosystems, which have already been successfully deployed in initiatives such as Bhutan’s National Digital Identity Ecosystem and the Global Legal Entity Identifier Foundation (GLEIF).
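
As a rough, hypothetical illustration of the kind of question the Trust Registry Query Protocol standardizes (the endpoint path, parameter names, and response shape below are invented for this example, not taken from the actual protocol specification):

```python
# Hypothetical sketch of a trust-registry query:
# "Does Entity X have Authorization Y, in the context of
#  Ecosystem Governance Framework Z?"
import requests

def is_authorized(registry_url: str, entity: str,
                  authorization: str, framework: str) -> bool:
    resp = requests.get(
        f"{registry_url}/query/authorization",
        params={
            "entity": entity,                # Entity X
            "authorization": authorization,  # Authorization Y
            "framework": framework,          # Governance Framework Z
        },
        timeout=10,
    )
    resp.raise_for_status()
    return bool(resp.json().get("authorized", False))

# Example: does a hypothetical issuer hold an "issue-diploma" authorization
# under an education ecosystem's governance framework?
# is_authorized("https://registry.example.org", "did:example:issuer123",
#               "issue-diploma", "example-education-framework")
```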

The Global Acceptance Network builds on the progress made by the Trust over IP Foundation by putting its frameworks into action. While numerous ecosystems today leverage various forms of credentialing and could benefit from sharing data or credentials with others, the real challenge lies in establishing governance standards that ensure these exchanges are trustworthy. This is where GAN comes in.

Much like Visa connects banks, merchants, and consumers within a trusted payment network, GAN’s purpose is to connect digital ecosystems. However, unlike Visa, GAN is not a centralised network and cannot operate as one. Instead, its strength lies in developing relationships with ecosystems and making specific claims about these ecosystems—claims that GAN is uniquely positioned to verify. These claims won’t be about the internal governance or authorities within an ecosystem, but rather about the ecosystem itself and its conformance to GAN’s trust criteria. Over time, as ecosystems are recognised by GAN or linked to the GAN network, the hope is that people and organisations will view these ecosystems as trusted entities, similar to how we implicitly trust the Visa network when we see its logo.

GAN’s ultimate goal is to solve human trust and governance problems by reducing the risks involved in accepting digital credentials or data from outside an organisation’s own ecosystem. This vision is closely aligned with the one we had when the Trust over IP Foundation was formed: a future with thousands of interconnected ecosystems, each with their own governance frameworks. GAN will act as a connector, ensuring that these ecosystems can interact and exchange trusted data, enabling secure, frictionless interactions—just like when we confidently tap our Visa cards at the checkout.

At Northern Block, we provide digital trust solutions that enable ecosystems to produce and manage valuable credentials. As demand for these credentials grows across ecosystems—something the Global Acceptance Network (GAN) can facilitate—the value for our customers increases. Additionally, as a provider of trust registry solutions, which support data models linked to ecosystem authorities as well as registries of registries, we aim to ensure that these registries can establish relationships with the GAN trust registry. This further enhances the value and interoperability of the ecosystems we support, driving greater trust and value.

The post Why Northern Block is Joining the Global Acceptance Network appeared first on Northern Block | Self Sovereign Identity Solution Provider.



Thales Group

Thales joins the CAC 40 ESG index


The inclusion of Thales in this index reflects the Group's accelerating progress in terms of social and environmental responsibility. Designed according to the highest international standards, the Group's CSR policy is at the heart of its strategy and perfectly in line with its corporate purpose, adopted in 2020: “Building a future we can all trust”.

“We are proud of Thales's inclusion in the CAC 40 ESG index. This is a strong endorsement by the financial community of our extra-financial performance and of our contribution to the protection of society, the planet and individuals,” says Isabelle Simon, General Secretary of Thales.

In 2023, Thales met or exceeded the 6 objectives of its CSR strategy, as defined in 2019 and then revised upwards in 2021:

- 52% reduction in operational CO2 emissions since 2018
- 100% deployment of eco-design in new product developments
- 20.4% women in management positions
- 86.8% of Group management committees include at least 3 women
- 100% of exposed employees trained in anti-corruption every two years
- 36.7% reduction in lost-time accident frequency rate since 2018

In 2025, the Group will unveil a new Horizon 2030 CSR roadmap.

For more information: Thales – Integrated Report 2023-2024 (thalesgroup.com)

About Thales

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.

It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.

Thales will be included in the CAC 40 ESG index as of market close on Friday, September 20, 2024. This index is designed to direct capital flows to the top 40 French companies in the CAC® Large 60 index demonstrating the best environmental, social and governance (ESG) practices.

Thales Australia’s Lithgow Arms partners with Våbenfabrikken to establish Danish small arms industrial capability

On 17 September 2024, Thales Australia and Denmark’s Våbenfabrikken announced that they are entering into a strategic cooperation to establish a new industrial capability in Denmark to produce NATO-interoperable small arms. The cooperation, and resulting outcomes, signal for the first time since the 1960s that military assault rifles will be produced in Denmark, providing an industrial capability to produce, maintain and sustain small arms in the country.

The first tranche will explore options for small arms production in Denmark, commencing with a Danish version of the Australian Combat Assault Rifle (ACAR). The ACAR is currently under development by Thales Australia and is based on a proven design in use with allied defence forces and law enforcement agencies. The cooperation between Thales and Våbenfabrikken aligns with the overall Danish defence strategy and supports Thales’s ambition to partner with local industry and develop sovereign supply chains to the benefit of our customers.
©Thales

Under the terms of a Memorandum of Understanding (MoU), Thales and Våbenfabrikken will work together to establish a new industrial capability in Denmark with the aim of producing, maintaining and sustaining interoperable small arms in Denmark. The MoU was signed on 17th of September 2024 at an official signing ceremony at Våbenfabrikken’s premises in Denmark.

Våbenfabrikken is an established gunsmith and weapons training provider, with decades of cumulative experience in the industry. By partnering with Thales Australia, Våbenfabrikken’s ability to support Danish national security priorities will be enhanced through sovereign small arms production, local maintenance capabilities and future skills development.

“We are very excited to work with Thales to bring a NATO small arms production and through-life-support capacity to Denmark for the first time in nearly 60 years. This cooperation is a real opportunity for Våbenfabrikken to grow, in terms of product offerings and staff, to better respond to the priorities and needs of Danish National Security. The DACAR (Danish/Australian Combat Assault Rifle) is a powerful capability and, once in country, over time it will offer additional export opportunities for Denmark,” says Kim Wiencken, Chairman of the Board of Våbenfabrikken.

“This agreement is the culmination of mutual effort, investment and trust between Våbenfabrikken, Thales Australia and Thales Denmark. This cooperation brings opportunities for both Australia, in respect to regional manufacturing, and Denmark through the provision of small arms assembly, sustainment and maintenance. We’re looking forward to working closely with Våbenfabrikken in the coming years,” said Matt Duquemein, Director Integrated Weapons System, Thales Australia.

“There is great development in the Danish defence industry, with promising new companies, and it has always been part of Thales’s DNA to support the local defence industry in order to maximise the benefit for our customers. Bringing the Australian Combat Assault Rifle (ACAR) to Denmark is a step towards creating a sovereign small arms capability to support the Danish MoD in the future. With this important cooperation, Thales and Våbenfabrikken will be committed to strengthening the local defence industrial footprint in support of overall Danish national security and security of supply in key areas,” said Martin Soegaard, CEO of Thales Denmark.


KuppingerCole

Offensive Security: Identifying Vulnerabilities Before Attackers Do


by Syed Ubaid Ali Jafri

As cyber threats become increasingly sophisticated, organizations must evolve their defense strategies to stay protected. Offensive security, which focuses on identifying and mitigating vulnerabilities before attackers can exploit them, is a crucial aspect of modern cybersecurity.
At cyberevolution 2024, Syed Jafri, Head of Cyber Defense & Offensive Security at Habib Bank Limited (HBL), will address these challenges. His expertise in offensive security practices and threat intelligence offers valuable insights for those looking to enhance their organization's defense mechanisms.


AI in Cybersecurity: Risks and Opportunities


by Alexei Balaganski

AI is often hailed as the ultimate tool for addressing cybersecurity challenges, but what happens when hype collides with reality? The meteoric rise of generative AI has captured the imagination of the public. From writing essays to producing art, AI can seemingly do anything. But can it really tackle the complex issues of cybersecurity effectively?

Let’s start with the elephant in the room: ChatGPT is not the pinnacle of artificial intelligence that many believe it to be. In fact, what we often mistake for the GenAI model’s competence is just its astonishing ability to instantly generate a response that sounds coherent and plausible, courtesy of billions of digital monkeys with typewriters.

Unfortunately, what these monkeys still lack is the honesty to admit that they don’t know something. Instead, they will happily generate pages of plausible-sounding nonsense (in the industry, this is politely referred to as “hallucinations”). To quote an article I read recently: “For decades, we were promised artificial intelligence. What we got instead is artificial mediocrity.”

Beyond the Hype: The Limits of Large Language Models in Cybersecurity

While ChatGPT may seem like an all-powerful assistant, it is not designed for or particularly good at many of the tasks necessary in cybersecurity. Large language models can write code, analyze texts, and even assist in decision-making, but their potential applications in a high-stakes field like cybersecurity must be approached with careful consideration.

Generative AI thrives on massive datasets. But in cybersecurity, those datasets often contain sensitive, confidential information that you would rather not share with an external model housed in a cloud data center. Add to that the huge computational overhead that these models require, and we are left with an unsustainable approach in the long term. Imagine the environmental costs: running LLMs with cutting-edge encryption, like fully homomorphic encryption, would take us closer to a climate catastrophe than Bitcoin mining ever did.

So, does this mean AI has no role in cybersecurity? Absolutely not. But we need to distinguish between what is hype and what is practical, scalable, and trustworthy.

Practical AI Use Cases in Cybersecurity: What Really Works

Long before ChatGPT was even a concept, machine learning (ML) techniques were already a staple in cybersecurity tools. From anomaly detection to behavioral analytics, AI-driven methods have long been applied to analyze large datasets and identify outliers that might signify a security breach.

The technology behind detecting anomalies, for instance, has been around for decades, well before the GenAI boom. It’s based on statistical methods that have been refined over the years. But here’s where things get tricky - detecting an anomaly is one thing, but determining whether that anomaly poses a real threat is quite another. With traditional methods, you may end up with a flood of anomalies, but with no real insight into which of them demand immediate action.
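
As a toy illustration of this classical, pre-GenAI approach, here is a sketch using scikit-learn's IsolationForest on synthetic login telemetry; the features and numbers are made up for the example.

```python
# Toy sketch of classical ML anomaly detection on login telemetry.
# Features per event: [hour_of_day, megabytes_transferred, failed_attempts]
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(13, 3, 500),   # logins cluster around business hours
    rng.normal(20, 5, 500),   # typical transfer volumes
    rng.poisson(0.2, 500),    # occasional failed attempts
])
suspicious = [[3.0, 400.0, 9.0]]  # 3 a.m., huge transfer, many failures

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # [-1] -> flagged as an outlier
```

Note what this does and does not do: the event is flagged as statistically unusual, but nothing here says whether it is actually exploitable or maps to a known attack technique, which is exactly the correlation gap described next.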

The most advanced AI/ML tools today do more than just identify anomalies. They correlate them with known attack vectors, connect them to a specific threat framework like MITRE ATT&CK®, and even provide detailed threat artifacts that can be used for further analysis. The real challenge is not in detection, but in correlation, for example, in figuring out which vulnerabilities are actually exploitable in your specific environment. All of this makes for a robust threat detection mechanism, but none of it requires the power of generative AI.

Behavioral Analytics: The Long Game in Cybersecurity

Another area where AI/ML shines is in behavioral analytics - tracking user and system behavior over extended periods to identify potential security risks. But again, this is not the domain of ChatGPT. Traditional ML methods are more than capable of profiling behaviors, identifying deviations from the norm, and flagging potential threats based on those deviations.

The challenge in behavioral analytics is not the technology itself – it is the data. To be effective, behavioral AI tools need access to large, diverse datasets. This is why the most effective solutions come from vendors who operate massive security clouds, collecting behavioral data from a wide range of users, systems, and geographies.

What’s key to understand here is that this method requires continuous learning over time. Unlike the hype around instant results from LLMs, behavioral analytics relies on consistent, long-term data collection to provide meaningful insights.
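
A toy sketch of that long-horizon idea: build a per-user baseline over time and only flag deviations once enough history exists. The observation window and threshold below are illustrative.

```python
# Toy sketch of behavioral baselining: learn a per-user profile over time,
# then score new activity by its deviation from that baseline.
import statistics

class UserBaseline:
    def __init__(self) -> None:
        self.daily_logins: list[float] = []

    def observe(self, logins_today: float) -> None:
        self.daily_logins.append(logins_today)

    def is_deviant(self, logins_today: float, z_threshold: float = 3.0) -> bool:
        if len(self.daily_logins) < 30:  # not enough history to judge yet
            return False
        mean = statistics.fmean(self.daily_logins)
        stdev = statistics.stdev(self.daily_logins) or 1e-9
        return abs(logins_today - mean) / stdev > z_threshold
```

On day one, a burst of logins is indistinguishable from normal behavior; after months of observations the same burst stands out. That is why continuous, long-term data collection matters more here than model sophistication.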

Threat Intelligence: Where an LLM Can Truly Make a Difference

Knowing your enemy is a major factor in any kind of warfare, not just in cybersecurity. However, in cybersecurity, this struggle is especially unfair – thousands if not millions of malicious actors are out there against us, and somehow, we must collect enough intelligence about them to understand their methods, techniques, and motives.

Unsurprisingly, the Threat Intelligence industry is growing rapidly - both cybersecurity vendors and customers are in constant need of every bit of information that can give them an advantage in defending against the next cyberattack. Unfortunately, a lot of this information is highly unstructured and difficult to quantify. Entire teams of security researchers spend their days trawling the dark web for bits of intelligence about malicious actors.

Natural language processing capabilities of LLMs can dramatically increase their productivity. These AI models can directly interpret textual data like threat reports, social media, and forum posts to assess emerging risks, correlate them with data from different sources, and thus provide up-to-date insights into global cyber threats.
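
A sketch of how that might look in practice; `llm_complete` is a placeholder for whichever model API is actually used, and the output schema is invented for the example.

```python
# Hypothetical sketch: turning unstructured threat chatter into structured
# intelligence with an LLM. `llm_complete` stands in for any model API.
import json

EXTRACTION_PROMPT = """Extract from the following post: the threat actor,
the targeted sector, the techniques mentioned, and any indicators of
compromise. Answer only with JSON using the keys:
actor, sector, techniques, iocs.

Post:
{post}
"""

def extract_intel(forum_post: str, llm_complete) -> dict:
    raw = llm_complete(EXTRACTION_PROMPT.format(post=forum_post))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # LLMs happily return plausible-looking non-JSON; treat that as a
        # failure to review, not as data (the "hallucination" problem again).
        return {"error": "unparseable model output", "raw": raw}
```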

Can AI Handle Automated Incident Response?

One of the most controversial promises of AI in cybersecurity is the potential for automated incident response. In theory, AI could detect a threat and neutralize it without human intervention. In practice, though, there’s a significant trust gap. Many companies remain wary of handing over control of their incident response processes to an AI, no matter how advanced. A poorly designed AI could do more harm than good: imagine it shutting down critical manufacturing systems because it misinterpreted a benign anomaly as a serious threat.

However, we are seeing a shift in attitudes. The explosion of ChatGPT’s popularity has made organizations more open to the idea of AI taking on more responsibility in their security operations. But it’s a gradual process. Many companies are opting for a phased approach, first using AI in a “dry run” mode, where it identifies threats but does not take action. Only after extensive testing do they move to a more automated setup.

But even with this cautious approach, the question remains: should we trust AI to make these decisions for us? In most cases, the answer is still no; at least, not without significant oversight from human operators.

Finding the Balance Between Technology, Risk, and Trust

AI undoubtedly has a role to play in the future of cybersecurity, but we need to keep our expectations grounded in reality. Generative AI is not the silver bullet that many make it out to be - it’s useful in specific contexts, but far from a game-changer in cybersecurity. Instead, we should focus on leveraging the right kind of AI for the right tasks.

As with any emerging technology, trust is earned, not given. In cybersecurity, where the stakes are high, it’s crucial to proceed with caution, ensuring that AI is used to complement human expertise rather than replace it. After all, AI may help us detect threats faster, but it’s human judgment that ultimately keeps our systems safe.

If you’re interested in learning more about AI applications from real human experts, you might consider attending the upcoming cyberevolution conference that will take place this December in Frankfurt, Germany. AI risks and opportunities will be one of the key topics discussed there.


Ontology

Inland Revenue’s Data Breach and Why Web3 Security Needs Decentralized Identity


The recent Inland Revenue data breach serves as a stark reminder of the fragility of centralized systems. When large organizations — whether they be governments, corporations, or tech giants — are responsible for housing vast amounts of sensitive data, a single error can have catastrophic consequences. In this case, it’s tax information. But the implications go much deeper.

We’ve seen time and again how centralized structures, a hallmark of Web2, fail to protect data adequately. Whether through technical vulnerabilities or human error, the result is the same — your personal information is left exposed. This isn’t just about tax records, passwords, or email addresses getting into the wrong hands. It’s about trust. And when that trust is broken, it takes years to rebuild, and we’ve all become painfully aware of how fragile that trust is in today’s digital age.

This is where decentralized identity (DID) comes in. DID flips the script, handing control back to individuals rather than institutions that often mismanage data. With decentralized identity systems, your personal information is no longer stored in a vulnerable central server; it’s distributed across a secure, immutable blockchain. You decide who gets access to your data and under what terms. You own it, you control it, and you can revoke access whenever you want.
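
For a concrete sense of what sits behind this, a W3C-style DID document is a small public record that binds an identifier to keys only the holder controls. A minimal sketch, with placeholder identifier and key values:

```python
# Minimal sketch of a W3C-style DID document, shown as a Python dict.
# The DID and key material are placeholders.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "z6Mk...placeholder",
    }],
    "authentication": ["did:example:123456789abcdefghi#key-1"],
}
# Only public material lives in the document; the private key, and with it
# control (including the power to grant or revoke access), stays with you.
```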

Web3 security technologies like Zero Knowledge Proofs, Self-Sovereign Identity, and decentralized storage solutions enable this shift. Instead of depending on a tax department or a tech giant to safeguard your data, you control every aspect of its distribution. Inland Revenue’s mishap should be a wake-up call, a signal that centralized systems are not built for the digital age we now inhabit. The centralized Web2 world is riddled with single points of failure, and as we become more reliant on digital systems, these failures become not just likely but inevitable.

In contrast, decentralized systems are trustless by design. You don’t need to trust an organization or a government to protect your data because the system itself is built on cryptographic proofs that ensure privacy and security. It’s about data sovereignty — taking back control over the very information that defines us.

Inland Revenue’s slip-up highlights a deeper truth: centralized data management is outdated and dangerous. The promise of Web3 is a system where users are empowered, not at the mercy of flawed institutions. This isn’t just an evolution in technology; it’s a fundamental shift in how we interact with and protect our personal information. The time has come to embrace decentralized systems, where security, privacy, and control are no longer luxuries but basic rights.

Are we ready to leave behind the vulnerabilities of Web2? The Inland Revenue incident suggests we don’t have much of a choice.

Interested in learning more about decentralized identities? Explore Ontology’s decentralized identity solutions and see how we’re building the future of trust.

Inland Revenue’s Data Breach and Why Web3 Security Needs Decentralized Identity was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Nov 12, 2024: Building Application Resilience Amidst Regulatory Shifts

In today’s fast-changing regulatory landscape, businesses must not only meet compliance standards but also ensure their applications are resilient against cyber threats. As regulations tighten and the risk environment evolves, organizations face growing pressure to safeguard their applications while staying compliant. The need to balance security with legal requirements has never been more critical for IT professionals.

Thales Group

Dr Aaron Roberts Wins Prestigious H Rowbotham Memorial Award for Human Factors Excellence in Submarine Domain


Dr Aaron Roberts

Dr Aaron Roberts, Senior Operability & Human Factors Specialist at Thales in the UK, has achieved a significant milestone by being awarded the H Rowbotham Memorial Award from the UK Ministry of Defence (MoD). Aaron is recognised as one of the youngest-ever recipients and the first to be honoured from the submarine domain. His recognition highlights his groundbreaking work in human factors, an essential field for improving safety, efficiency, and decision-making in high-stakes defence environments.

Recognised by the Defence Science and Technology Laboratory

Aaron’s nomination for this prestigious award came from Dr Ben Evans, a senior figure at the Defence Science and Technology Laboratory (Dstl). This endorsement not only reflects Aaron’s individual excellence, but also demonstrates the strength of Thales’ collaboration with its customers. Through his work, Aaron has contributed significantly to improving human-technology interaction in the submarine sector, setting new standards for operational effectiveness in this challenging environment.

The Role of Human Factors in Submarine Operations

Human factors are essential in the development of defence technologies, particularly in the submarine domain, where human-technology interaction can make the difference between mission success and failure. Aaron’s research has focused on refining interface design, improving situational awareness, and reducing the cognitive burden on operators, ensuring that submariners can operate with precision and confidence even in the most stressful conditions.

A Commitment to Operational Excellence at Thales UK

Thales’ emphasis on human factors is a key part of its mission to deliver cutting-edge capabilities that support the UK’s national security. The company works closely with customers like the MoD to ensure that its solutions not only meet technical requirements, but also integrate seamlessly with human operators. By focusing on the real-world application of its technologies, Thales ensures that operators at the sharp end of defence operations have the tools they need to succeed.

Fostering Talent and Innovation

Dr Roberts’ award is not just a recognition of his outstanding work, but also reflects Thales’ commitment to nurturing talent and fostering innovation. The company’s inclusive and supportive work culture encourages continuous learning and development, helping employees like Aaron push the boundaries of what is possible in the defence sector.

Astute Class Submarine image (top) © Crown copyright 2024


Monday, 16. September 2024

Thales Group

Guillermo Roselló Massa, New General Manager of the Defence Area at Thales


Guillermo Roselló joins Thales to lead the defence area of the technology multinational in Spain. 

Roselló will take over the general management of the Spanish subsidiary, replacing José Sarnito, who leaves the position after more than 15 years at the head of the defence area in Spain.

Roselló has extensive experience in the aerospace sector, especially in the management of large international projects. His experience in the Public Administration and in multilateral organisations such as NATO, together with his military background, gives him an in-depth knowledge of all the stakeholders in the defence sector.

With this incorporation, the Thales Group advances in its commitment to Spain, where it expects to grow in the field of defence and security, cybersecurity and digital security in the coming years. 

About Thales 

Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity.
It develops products and solutions that help make the world safer, greener and more inclusive.

The Group invests close to €4 billion a year in Research & Development, particularly in key innovation areas such as AI, cybersecurity, quantum technologies, cloud technologies and 6G.

Thales has close to 81,000 employees in 68 countries. In 2023, the Group generated sales of €18.4 billion.


1Kosmos BlockID

Streamlining Self-Service User Onboarding with 1Kosmos MFA Integration


In today’s fast-paced digital world, efficient and secure user onboarding is a crucial aspect of any organization’s IT strategy. Imagine users maintaining countless account credentials to log in to their office productivity tools suite. Sounds cumbersome, right? 1Kosmos as a multi-factor authentication partner is here to solve that very issue.  

Microsoft, a leader in productivity and cloud solutions, has partnered with 1Kosmos to enhance user onboarding through the integration of 1Kosmos multi-factor authentication. This collaboration offers a streamlined and secure self-service onboarding process, addressing key concerns of identity verification and user experience.

Self-service onboarding enables users to onboard and enroll their identity independently, reducing the burden on IT and HR teams and improving operational efficiency.  

Traditional onboarding processes often involve multiple steps and require substantial input from users, leading to significant administrative overhead and potential delays. This conventional approach typically includes a series of manual tasks such as creating user accounts, assigning permissions, configuring access to various systems, and ensuring compliance with security protocols. Each of these steps demands careful attention and coordination from IT personnel to ensure that new hires are properly integrated into the company’s IT infrastructure. This can be time-consuming and prone to human error, especially in large organizations with complex IT environments.

In contrast, Microsoft’s self-service onboarding solution streamlines this process by allowing new employees to handle many of these tasks independently through a user-friendly interface. This modern approach not only reduces the workload of IT staff but also accelerates the onboarding timeline, enabling new hires to get up and running more quickly. By automating routine tasks and providing a seamless, guided experience for users, self-service onboarding enhances operational efficiency and ensures a more consistent and error-free setup process. 

How 1Kosmos Enhances Self-Service Onboarding: 

1Kosmos provides a robust identity verification solution that enhances the self-service onboarding process. The key benefits of integrating 1Kosmos in Microsoft’s self-service onboarding include: 

- Improved Security: 1Kosmos offers sophisticated identity verification through biometric and blockchain-based technology. This ensures that the users being onboarded are legitimate and significantly reduces the risk of identity fraud and unauthorized access.
- Enhanced User Experience: Traditional MFA methods can be cumbersome, requiring users to remember and manage multiple credentials. 1Kosmos simplifies this by using biometric data and blockchain technology, which are not only more secure but also more convenient for users.
- Streamlined Processes: With 1Kosmos integrated into Microsoft’s onboarding framework, the process becomes more intuitive. Users can complete the verification process quickly using their biometric data, which reduces friction and accelerates the overall onboarding timeline.
- Reduced IT Workload: By automating and securing the identity verification process, 1Kosmos reduces the need for IT intervention in the onboarding process. This allows IT teams to focus on more strategic tasks rather than managing routine account setup and security issues.

1Kosmos integrates seamlessly with Microsoft’s product suites, so Microsoft’s productivity tools and cloud solutions benefit directly from the integration. It ensures that users accessing Microsoft applications, such as Office 365 or Azure, are securely verified through advanced authentication methods. As a result, organizations can maintain a high level of security while offering a user-friendly experience.

The partnership between Microsoft and 1Kosmos represents a significant advancement in self-service user onboarding. By incorporating 1Kosmos as an MFA factor, Microsoft enhances the security and efficiency of the onboarding process, benefiting both users and IT teams. As organizations continue to prioritize digital transformation, adopting such innovative solutions will be essential for maintaining a secure and efficient IT environment. 

Integrating 1Kosmos with Microsoft’s self-service onboarding process not only enhances security but also improves the overall user experience, setting a new standard for efficient and secure account management. 

The post Streamlining Self-Service User Onboarding with 1Kosmos MFA Integration appeared first on 1Kosmos.


Trinsic Podcast: Future of ID

Calvin Fabre - Envoc's Role in Pioneering Mobile Driver’s Licenses in Louisiana


In this episode, I’m joined by Calvin Fabre, President and Founder of Envoc, a company that has been at the heart of mobile driver's license (mDL) innovation in Louisiana, a state leading the nation in mDL adoption. Calvin shares the fascinating story of how his company helped bring the country’s first digital driver’s license into reality, starting with a simple idea for a “digital glove box.”

We dive into a variety of topics, including:

- The journey from bidding on payment processing systems to developing a groundbreaking mDL system for the Louisiana DMV
- How Envoc navigated the complexities of legislation and law enforcement adoption to make digital driver's licenses legal for routine traffic stops
- The importance of user feedback in expanding the LA Wallet app to include hunting licenses, concealed carry permits, and even COVID-19 vaccine cards
- The unique role LA Wallet has played in verifying identity remotely, including for disaster relief and online age verification for adult content
- Insights on the future of digital credentials, from frictionless onboarding to the growing adoption of mDLs in industries like banking and retail

Calvin’s expertise offers a deep dive into the future of identity and digital credentials, making this episode a must-listen for anyone interested in the intersection of technology, law enforcement, and secure digital identification.

You can learn more about Envoc at envoc.com.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


Caribou Digital

Breaking down power imbalances through co-creation


Written by Chelsea Horváth, Measurement & Impact Manager, and Grace Natabaalo, Research & Insights Manager, both at Caribou Digital.

Co-creation has become an increasingly important topic and practice within the research, evaluation, and development communities.

Like many others in our community of practice, at Caribou Digital, we’re reflecting on co-creation in our work. At first glance, co-creation seems simple enough — create something with others.

But when the rubber hits the road, sticky questions arise. Who needs to be involved? What information is shared and how? How much time and resources are required to co-create? How is consensus reached? Who makes the final decision? Through trial and error and learning from others in the field, we’d like to share our experience and lessons on co-creation within research.

Caribou Digital’s approach to co-creation

At Caribou Digital, we understand co-creation to be an “approach that brings people together to collectively produce a mutually valued outcome and that involves a participatory process assuming some degree of shared power and decision-making.”

At conferences and in requests for proposals, we often see that co-creation is confused with collaboration (see the table below created by the authors).

The key differences between the two can be found in the definition above: breaking down power structures and decision-making. Without time and resources dedicated to those aspects, attempts at co-creation become more like collaboration.

[Table: the differences between consultation, collaboration, and co-creation, created by the authors.]

Using co-creation to center young people as experts in their own digital futures

In partnership with the Mastercard Foundation, Caribou Digital researched young people’s experiences with digital technologies in Africa, selecting 20 young people from across seven countries to co-create with. They included young people whose stories are not often seen or heard, such as women, people living with disabilities, refugees, and those living in rural areas.

The research team recognized that, despite good intentions, power imbalances would exist among the young people, the Mastercard Foundation, and Caribou Digital. These would hinder important insights that could lead to more strategic and relevant recommendations.

From the outset, we created an environment to alleviate these power imbalances. The co-creation process involved treating the young people as experts whose stories shaped the report, emphasizing collaboration and flexibility. This approach was outlined in the Terms of Reference, which each young person signed at the beginning of the project. At the first video conferencing session, expectations were aligned and rules of engagement were set. The young people reviewed and provided feedback on the research coding framework, shaping the language and direction of the project. Video conferencing sessions to share experiences were made inclusive and accessible, with flexible post-session reflection assignments to accommodate all needs. During the report-writing phase, panelists reviewed drafts, edited their quotes, and provided feedback, culminating in a discussion on how best to present the final report.

In reflecting on our co-creation process, three core learnings emerged.

Lesson #1: Storytelling and reflection assignments yield richer data in a non-extractive way.

Rather than extract young people’s experiences through various data collection methods, we used storytelling and reflection assignments to co-create this research. From the beginning, Caribou Digital emphasized that the young people were the experts. Their stories were the foundation of the report; our role was to facilitate and listen. The online video conference format allowed the young people to build on one another’s experiences, feel validated, and connect in a non-extractive process. Post-session reflection assignments (for example, asking the young people to reflect on how digital technologies have impacted their choice and agency) allowed them to reflect on their own and in a convenient mode (audio message or email). Providing feedback on the research process, one young person shared, “The room was always accommodating of all of us who wanted to speak, and the moderators were tolerant of our views. I felt [at] home to speak/write from the reality of my experience.”

Lesson #2: Double the time and resources needed for co-creation.

Co-creation required more time, planning, and resources than initially thought. Every video conference session required thoughtful preparation to ensure a welcoming and inclusive environment — from the slide deck to the video captions. Reflection assignments and video recordings were analyzed carefully to ensure they accurately represented the young people’s experiences. Extra time was needed for the young people to review report drafts, edit quotes, and expand on their experiences. A safe estimate for others looking to use this co-creation approach would be to double the time and human resources needed.

Lesson #3: Accountability, transparency, and flexibility are key co-creation ingredients.

It was important for Caribou Digital to develop a trusted working relationship with the young people to keep them engaged throughout the research process. We were accountable when things weren’t working well and shared how the young people’s feedback was incorporated into the report. We were transparent with expectations for the research and when honorarium payments were delayed. We were flexible when the young people couldn’t provide feedback on time or attend a video conference session due to busy schedules. These practices kept the young people engaged throughout the research process. When asked to provide anonymous feedback on the research process, one participant shared, “[Caribou] was always in touch both in the Zoom session and WhatsApp to guide in case anything wasn’t right. […] We also had timely reminders for the meetings, and at no point was I caught offside or unaware of a meeting.”

Catalyzing research with co-creation

When done well, co-creation is an incredibly powerful practice that can elevate and amplify marginalized voices and improve the quality of research products. Our co-creation journey with these 20 young people was enriching and insightful, underscoring the value of trust and transparency.

By prioritizing youth voices and experiences, the 20 young people, Caribou Digital, and the Mastercard Foundation crafted a powerful report that reflects young people’s perspectives and experiences on digital technologies in Africa. One young person shared, “I feel like [co-creation] is a good approach because it lends to the authenticity of the report since these are our lived experiences […] It also makes the report relatable to fellow youth especially.”

Caribou Digital is committed to continuing this approach and conducting more co-created research. If you’re interested in participating in such initiatives or have ideas for collaboration, we invite you to connect with us at chelsea@cariboudigital.net.

Breaking down power imbalances through co-creation was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


HYPR

What Is Phishing-Resistant MFA and How Does it Work?

Phishing, despite its somewhat innocuous name, remains one of the foremost security threats facing businesses today. Improved awareness by the public and controls such as multi-factor authentication (MFA) have failed to stem the tide.

The FBI Internet Crime Report puts phishing and its variants (spear phishing, smishing, vishing) as the top cybercrime for the last five years, and the advent of generative AI has only added fuel to the fire. Using ChatGPT and other tools, hackers can quickly create personalized messages, in local languages, to launch widespread, highly effective phishing campaigns.

In the last six months alone, malicious emails have increased by 341%, prompting industry experts to urge organizations of all sizes to implement phishing-resistant MFA.

So, what is phishing-resistant MFA and how does it differ from traditional MFA? In this article, find phishing-resistant definitions and use cases, and learn why it’s the safest option for organizations.

What is Phishing?

Phishing is a method of attack used by malicious actors that involves deceiving users into installing malware or revealing sensitive information such as passwords, payment card and social security numbers. With this information they can take over accounts, sell the information on the dark web, steal identities and even access internal systems and networks of an organization. 

Common phishing attacks include:

- Email phishing: Attackers send emails, typically with malicious links or attachments that steal sensitive data from users.
- Whale and spear phishing: Similar to email phishing, whale and spear phishing are more targeted, aimed at specific, typically high-profile people in the organization (e.g. the CEO or another executive).
- Smishing and vishing (voice phishing): Smishing uses SMS messages while vishing uses either a mobile or landline, combining it with social engineering attacks.
- Domain phishing/impersonation: Attackers typically pretend to be well-established brands to gain users’ trust and trick them into divulging sensitive information.
- Malicious attachments: Attachments contain malware that infects systems and can trigger ransomware or other attacks that steal sensitive data.

What is Multi-Factor Authentication?

Multi-factor authentication requires at least two independent factors from different categories: knowledge, or something you know (e.g., password, PIN, security question); possession, or something you have (e.g., OTP code, device); and inherence, or something you are (e.g., fingerprint or other biometric marker).

It is different from two-factor authentication (2FA) in that 2FA requires an additional verification step beyond your username and password but doesn’t require that step to come from a different authentication category, as MFA does.

Phishing-Resistant MFA Overview

Phishing-resistant authentication does not use shared secrets at any point in the login process, eliminating the attacker's ability to intercept and replay access credentials and hardening the authentication process so that it cannot be compromised by even the most sophisticated phishing attacks. Passwordless MFA based on FIDO standards is considered the gold standard for phishing-resistant authentication by the OMB and other bodies.

Phishing-resistant MFA is based on public/private key cryptography and follows the guidelines published by the OMB in its M-22-09 Federal Zero Trust Strategy memorandum and the requirements for “verifier impersonation resistance” outlined by the National Institute of Standards and Technology (NIST) in SP 800-63-3.  

The Problem With Traditional MFA

There are two different problems when it comes to traditional MFA. The first is that it causes friction, both for employees who use it to access accounts and consumers who want to make their purchases quickly. 

The second problem is a security issue. Unfortunately, the most common second factor in traditional MFA is “something you have” in the form of an SMS or OTP. Like passwords, these verification methods are highly vulnerable to phishing as well as MitM (Man-in-the-Middle) attacks. In order for MFA to resist phishing, it cannot rely on the use of SMS, OTPs, or identification attempts through voice calls or interceptable push notifications.

Why Phishing-Resistant MFA is the Gold Standard

A better solution is FIDO or PKI-based passwordless authentication. These phishing-resistant MFA methods remove the vulnerabilities that undermine traditional MFA, including any use of a “something you know” factor, as these are the target of the majority of phishing attacks.

Phishing-resistant MFA does not use any of these weaker authentication factors. It uses a strong possession factor in the form of a private cryptographic key (embedded at the hardware level in a user-owned device) and strong user inherence factors such as touch or facial recognition. Equally important, the backend authentication process does not require or store a shared secret.
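To make this concrete, here is a minimal sketch of the underlying challenge-response pattern, written in TypeScript against Node's built-in crypto module. It is illustrative only, not HYPR's implementation, and it omits details a real FIDO flow adds, such as binding the signature to the site's origin.

import { generateKeyPairSync, randomBytes, sign, verify } from "node:crypto";

// Enrollment: the key pair is created on the user's device.
// Only the public key is registered with the server; there is no shared secret.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Login, step 1: the server issues a fresh random challenge.
const challenge = randomBytes(32);

// Login, step 2: the device signs the challenge with the private key,
// unlocked locally by a biometric or PIN that never leaves the device.
const signature = sign(null, challenge, privateKey);

// Login, step 3: the server verifies the signature with the stored public key.
// An intercepted signature is useless for replay: the next login gets a new challenge.
console.log(verify(null, challenge, publicKey, signature)); // true

Because the server stores only a public key, a breach of its database yields nothing an attacker can replay.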

Since 2022, CISA, the Cybersecurity and Infrastructure Security Agency, has strongly recommended that all organizations implement phishing-resistant MFA based on FIDO standards. This is considered the gold standard for phishing-resistant authentication by NIST (SP 800-63B), the FFIEC, the OMB and other cybersecurity authorities.

[Diagram: Phishing-resistant MFA flow]

Breaking Down Phishing-Resistant Multi-Factor Authentication

Phishing-resistant multi-factor authentication defends against attackers who are looking to bypass authentication controls. This more advanced level of security involves various technologies and processes, which can be implemented in a number of ways.

Strong Authentication

A hallmark of phishing-resistant MFA is strong authentication that provides a robust defense against phishing and other targeted attacks. A somewhat broad concept, it involves using secure cryptographic protocols and two or more authenticating factors that include proof of device possession as well as user biometrics.

Passkeys

Passkeys replace passwords and secrets with cryptographic key pairs and on-device biometrics for faster, easier, and more secure sign-ins to websites and apps. Unlike passwords, passkeys are always strong and phishing-resistant. Passkeys can be either synced or device-bound. Synced passkeys are the standard passkeys offered by Apple, Microsoft, Google and others.

The private key is securely stored in a vault, such as the OS keychain or a password manager, and can be synced between devices. Device-bound passkeys, by contrast, are stored on a specific hardware device and cannot be shared with other devices.
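In the browser, passkeys are created through the standard WebAuthn API. The hedged TypeScript sketch below shows a registration call; the relying-party details, user fields, and the server-supplied challenge are placeholders you would replace with values from your own backend.

// Register a passkey for the current user (runs in the browser).
async function registerPasskey(serverChallenge: Uint8Array): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: serverChallenge,                       // random bytes issued by your server
      rp: { id: "example.com", name: "Example Corp" },  // the relying party (your site)
      user: {
        id: new TextEncoder().encode("user-123"),       // stable, opaque user handle
        name: "alice@example.com",
        displayName: "Alice",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // biometric or device PIN
      },
    },
  });
}
// The returned attestation is sent to the server, which stores the public key;
// later logins call navigator.credentials.get() with a fresh challenge.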

Security Keys 

Security keys are devices that store cryptographic keys and can be either hardware or software-based. Software-based keys might be stored on and integrated into mobile devices, for example, whereas hardware keys are dedicated physical devices. However, hardware keys have limitations: they can easily be lost or stolen and can be challenging to recover.

Biometric Authentication

Biometric authentication focuses on biological methods of identification such as fingerprints or face recognition to verify identity for the inherence (i.e., “something you are”) authentication factor. It is often integrated into devices such as mobile phones or computers.

Adaptive Authentication 

While not technically an element of phishing-resistant MFA, adaptive authentication enforces verification of identity based on the user’s context and risk. For example, it would have a different process based on the user’s location (e.g. home or work) and device (e.g. phone or work computer). 
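A toy sketch of such a policy, with invented context fields, might look like this:

// Toy adaptive-authentication policy: step up verification when context looks risky.
type LoginContext = { knownDevice: boolean; location: "home" | "office" | "other" };

function requiredFactors(ctx: LoginContext): string[] {
  // Trusted device in a familiar location: a single phishing-resistant factor suffices.
  if (ctx.knownDevice && ctx.location !== "other") {
    return ["passkey"];
  }
  // Anything unusual: require an additional verification step.
  return ["passkey", "device-biometric"];
}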

The Cost of Phishing Attacks

Phishing plays a role in various types of attacks. According to the 2023 Verizon Data Breach Investigations Report, phishing accounted for 44% of social engineering breaches, with a median of $50,000 stolen per Business Email Compromise incident. It’s also a key initial attack vector in credential stealing, allowing hackers to initiate fraudulent transactions, deliver malware including infostealers and ransomware, and gain an authenticated foothold from which they can move laterally within the system.

The Cost of a Data Breach 2024 report by IBM estimates that the average cost of a data breach is $4.88 million, an increase of 10% from the year before. Unfortunately, the go-to mitigation against phishing, adding traditional MFA, has proven inadequate. Sometimes traditional MFA mechanisms are even used as part of the attack itself.

Most multi-factor authentication solutions feature a password as one of the verification factors. The additional authentication factor generally is a one-time password (OTP) sent by voice, SMS, or email, or a push notification via an authenticator app that the user must accept.

Today, automated phishing kits that can circumvent these methods are readily available to hackers. Cybersecurity experts claim that over 90% of all multi-factor authentication is phishable. Due to these MFA vulnerabilities and the threat posed by phishing, the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Government Office of Management and Budget (OMB), as mentioned above, have specifically called for phishing-resistant MFA. 

Why Organizations Need to Prioritize Phish-Resistant Authentication

While the need for phishing-resistant MFA has been apparent for some time, and was a key driver for establishing the FIDO Alliance, the generative AI trend, and ChatGPT in particular, has kicked this into overdrive. Cybercriminals now have the ability to send massive numbers of highly targeted phishing attacks using dark web ChatGPT counterparts such as FraudGPT and WormGPT.

According to Slashnext’s State of Phishing 2024 Mid-Year Assessment, there has been a 4151% increase in malicious emails since the advent of ChatGPT in late 2022. 

As phishing attacks have increased, so has the incidence of account takeover (ATO),  leading to a number of potential consequences for targeted organizations, including supply chain fraud, data theft and the installation of ransomware and other malware. Attackers can also use the hijacked account of one user to escalate attacks within the organization by sending malicious emails from a trusted user.

Multi-factor authentication has proven ineffective against modern phishing campaigns, which are able to phish both the initial login credentials and the second factor. For example, a phishing message might direct the victim to a proxy website while the attacker acts as a man-in-the-middle to steal both the password and OTP code.

This is only one of many tactics cybercriminals use to compromise multi-factor authentication that uses OTPs or SMS. Others include running legitimate versions of websites on their own servers, using robocalls to convince users to hand over codes and SIM-swapping, so messages are sent to an attacker’s phone.

The skyrocketing number of phishing attacks in general, accompanied by sophisticated tactics that can circumvent common authentication checks, means that phishing-resistant MFA is no longer optional. Instead, it is the only choice to keep employees and organizations safe from the vast majority of phishing threats.

How to Choose a Phishing-Resistant MFA Solution

When considering a phishing-resistant MFA solution, you’ll want to ask about its ability to completely remove shared secrets (passwords, OTPs), its support for multiple devices (e.g. desktop and mobile), and its ability to reduce friction for the user experience.

For example, does it secure authentication for remote workers and work offline? Is it intuitive and easy for new users to learn? You’ll also want to verify how long it takes to deploy across your organization and whether it integrates with major identity providers (IdPs). Finally, you’ll want to make sure it’s FIDO Certified and achieves compliance with Zero Trust architecture and regulatory obligations.

Considerations When Implementing Multi-Factor Authentication

Implementing multi-factor authentication within your organization involves a few different factors to evaluate:

- Security strength: Although MFA typically protects against brute-force attacks, some types of authentication are still subject to phishing. To ensure the highest level of security, consider phishing-resistant MFA that is FIDO-compliant.
- Cost: You’ll need to evaluate the costs of the solution, which include not only setup and user training but ongoing maintenance costs. Keep in mind that while some solutions might cost more, they may also deliver better security and be easier for your team to implement. Some solutions may also impact productivity at the time of deployment, so that might be a consideration.
- Flexibility: Users want a number of different options available for MFA. Check that your solution offers different methods of authentication, such as verification via a mobile application or hardware keys, to adjust to the needs of different users and environments.
- Scalability: Can the solution adapt to the changing needs of your organization? Can it handle a remote workforce? Does it offer MFA for networks, servers, and cloud infrastructure?

Learn how to evaluate passwordless security solutions

HYPR's Phishing-Resistant MFA Solution

It’s clear that phishing-resistant MFA is critical, but what does it look like in practice? HYPR’s Passwordless MFA solution is based on the FIDO standards and provides phishing-resistant authentication from desktop through to cloud applications, no matter where your workforce is located.

HYPR leverages public key cryptography to allow for secure authentication that fully eliminates the use of shared secrets between parties. Just as importantly, the HYPR platform is easy to deploy and makes logins fast and easy for the user. Complicated sign-in processes are one of the biggest reasons that people take shortcuts or use unsafe practices that criminals exploit. 

To learn more about passwordless security and phishing-resistant MFA, read our Passwordless 101 guide.

FAQs

What is the difference between passwordless and phishing-resistant MFA?
Not all passwordless MFA is phishing-resistant or indeed really passwordless. OTP codes, after all, are a form of password. A solution that uses any kind of shared secret can still be compromised by phishing, man-in-the-middle and other attacks that target credentials. Phishing-resistant MFA, on the other hand, ensures that even if users are targeted with phishing attacks, there are no credentials available to steal and their authentication remains secure.

What are the benefits of phishing-resistant MFA?
Phishing-resistant MFA delivers a number of benefits to the user. First, it delivers a friendly user experience that eliminates the friction involved in the traditional MFA process. Second, it provides a higher level of security than two-factor authentication or traditional multi-factor authentication. 

Can phishing bypass 2FA?
Yes, phishing can bypass 2FA using a number of different methods such as man-in-the-middle attacks, password resets and social engineering attacks. This is because most 2FA verification methods involve one-time passwords (OTP) via email or SMS, which can be easily intercepted.

Why are passkeys phishing resistant?
Passkeys are phishing resistant as they are based on FIDO standards which were designed to resist phishing as well as some other forms of attack. They consist of cryptographic key pairs, which are registered to a specific authenticating service, ensuring that the passkey only works with the exact domain name of the service. There are no passwords or shared credentials to phish and a spoofed site cannot use them.

Editor's Note: This blog was originally published May 2022 and has been completely revamped and updated for accuracy and comprehensiveness.


KuppingerCole

Evidian Orbion IDaaS solution

by Martin Kuppinger

This KuppingerCole Executive View report examines Evidian Orbion, the next-generation IDaaS solution from Evidian. Orbion provides a comprehensive, integrated approach to Identity as a Service (IDaaS), addressing all major areas of Identity and Access Management (IAM) beyond just the workforce. This report includes a technical review of the solution Evidian Orbion.

Microsoft Entra ID Governance

by Martin Kuppinger

This KuppingerCole Executive View report looks at Microsoft Entra ID Governance, the IGA (Identity Governance & Administration) solution within the Microsoft Entra portfolio. Microsoft Entra ID Governance is delivered as IDaaS (Identity as a Service). It allows simple and fast deployment of IGA capabilities with a good set of capabilities serving the requirements of a wide range of customer use cases.

Sunday, 15. September 2024

KuppingerCole

Beyond ChatGPT: AI Use Cases for Cybersecurity

How can artificial intelligence be used in cybersecurity? Matthias and Alexei asked ChatGPT exactly this question and it came up with quite a list of use cases. They go through this list and discuss it. They explore the different forms of AI aside from generative AI, such as non-generative AI and traditional machine learning. They highlight the limitations and risks associated with large language models like GPTs and the need for more sustainable and efficient AI solutions.

The conversation covers various AI use cases in cybersecurity, including threat detection, behavioral analytics, cloud security monitoring, and automated incident response. They emphasize the importance of human involvement and decision-making in AI-driven cybersecurity solutions.

Here's ChatGPT's list of AI use cases for cybersecurity:

- AI for Threat Detection: AI analyzes large datasets to identify anomalies or suspicious activities that signal potential cyber threats.
- Behavioral Analytics: AI tracks user behavior to detect abnormal patterns that may indicate compromised credentials or insider threats.
- Cloud Security Monitoring: AI monitors cloud infrastructure, detecting security misconfigurations and policy violations to ensure compliance.
- Automated Incident Response: AI helps automate responses to cyber incidents, reducing response time and mitigating damage.
- Malware Detection: AI-driven solutions recognize evolving malware signatures and flag zero-day attacks through advanced pattern recognition.
- Phishing Detection: AI analyzes communication patterns, spotting phishing emails or fake websites before users fall victim.
- Vulnerability Management: AI identifies system vulnerabilities, predicts which flaws are most likely to be exploited, and suggests patch prioritization.
- AI-Driven Penetration Testing: AI automates and enhances pen-testing by simulating potential cyberattacks and finding weaknesses in a network.
- Anomaly Detection in Network Traffic: AI inspects network traffic for unusual patterns, preventing attacks like Distributed Denial of Service (DDoS).
- Cybersecurity Training Simulations: AI-powered platforms create dynamic, realistic simulations for training cybersecurity teams, preparing them for real-world scenarios.
- Threat Intelligence: NLP-based AI interprets textual data like threat reports, social media, and news to assess emerging risks.
- Predictive Risk Assessment: AI assesses and predicts potential future security risks by evaluating system vulnerabilities and attack likelihood.


DHIWay

Decentralized Identity: It’s Not What You Think

In an increasingly digital world, proving who we are has never been more critical or misunderstood. The conversation around decentralized identity often suggests that it will replace the systems we’ve relied on for so long, tearing down the old to make way for the new. But that’s not the reality. These identity models aren’t adversaries locked in a battle for dominance; they are complementary forces that, when combined, can create a more secure, flexible, and empowering future for us all.

Think about it: our identity isn’t just a name, an ID card, or a social media profile. It’s a complex web of credentials, reputations, and relationships rooted in something deeply personal and sovereign—the name given to us at birth. This idea of identity is naturally decentralized. Yet, in today’s digital world, we are forced to rely on borrowed identifiers—like email addresses, mobile numbers, and social media accounts—that leave us vulnerable and powerless.

What if we could reclaim that sense of sovereignty in the digital realm? Imagine having a digital identity as uniquely ours as our name—one that we fully own and control, without ever compromising our privacy or security.

To bring this vision to life, we must rethink digital identity—not as a choice between centralized or decentralized systems, but as a fusion of their strengths. When these two approaches unite, they create a powerful framework of trust that offers more security, flexibility, and empowerment than either could achieve alone.

The Nature of Identity: Rooted in Sovereignty

To understand the future of digital identity, we need to start with a simple but powerful truth: our identities are inherently sovereign. From the moment we are born, our identities begin with our names—given to or chosen for us, not issued by any central authority. These names belong to us, and only us. Over time, they become associated with a rich tapestry of experiences, accomplishments, and relationships that form our reputations.

In the physical world, we build our identities by linking credentials to our names—birth certificates from governments, diplomas from universities, and membership cards from professional organizations. Each of these credentials contributes to the reputation of our names, like threads weaving together the fabric of who we are. No single entity controls all these threads; they come from diverse sources, adding depth and nuance to our identities.

But in the digital realm, this natural decentralization begins to unravel. Online, our identities are often reduced to borrowed credentials—an email address from a tech company, a social media profile, or a phone number managed by a telecom provider. Third parties control these digital identifiers, which don’t truly belong to us. They can be revoked, altered, or exploited without our consent.

What’s more, we lack control over our data. In the current model, we are compelled to hand over vast amounts of personal information to third parties for authentication and authorization. This means our data—our actions, preferences, and relationships—ends up in centralized databases that are often opaque and vulnerable. We have little say over how this data is collected, used, shared, or sold, making us passive participants in our digital lives.

This brings us to a critical realization: our current digital identities do not reflect the sovereignty and flexibility of our real-world selves. Instead, they are fragmented and vulnerable, exposed to misuse and exploitation, and ultimately subject to the control of entities whose interests may not align with ours.

But what if our digital identities could be as sovereign and flexible as the names we were given at birth? What if we could build digital reputations similarly—by linking credentials to identities we fully own and control? This is where the concept of cryptographic identifiers—a new digital foundation—comes into play.

The Core of Digital Identity: A Key Pair as Our Digital Name

Public key cryptography, a cornerstone of digital security for decades, lays the groundwork for a digital identity we truly own and manage ourselves. It revolves around a pair of cryptographic keys: a private key known only to us and a public key, which we can share with others. This key pair becomes the digital root of trust—an anchor for our online identity that remains under our control alone.

Think of the private key as our personal signature, kept secret and secure, while the public key acts like our digital name—something we can share openly and widely. Together, they create a powerful method to authenticate who we are online, without relying on any third-party provider. Just like the names given to us at birth, our digital key pair is unique and completely within our control.

But how does a key pair build trust? Here’s where it gets interesting.  Just as our real-world name gains recognition and credibility through our experiences, accomplishments, and relationships, our digital identity earns its reputation through credentials tied to our key pair. These credentials—whether issued by a government, a university, or a professional organization—are cryptographically signed and secured.

What makes this powerful is that these credentials are verifiable at any time by anyone who needs to confirm our identity, qualifications, or achievements—without ever having to return to the original issuer. This instant, trust-based verification protects our privacy. It empowers us to build and present our digital reputation with the same confidence and autonomy we enjoy in the physical world.

Building Our Digital Reputation: The Key Pair in Action

Think of our digital key pair as a blank canvas, ready to be filled with the credentials that define us. Over time, we can attach verifiable credentials to this key pair—our digital driver’s license, a degree from our university, or proof of employment from our company. Each of these credentials contributes to our digital reputation, enabling us to build trust without giving up control.

Imagine needing to prove our professional qualifications to a potential employer. Instead of submitting physical documents or scans, we present a set of digital credentials tied to our key pair. The employer can instantly verify these credentials, thanks to cryptographic proofs that confirm the appropriate authorities issued them. No lengthy checks or third-party databases are required—just immediate, secure trust.

This concept extends beyond professional credentials. Suppose we need to access an age-restricted service online. Rather than disclosing our full name, date of birth, and address, we can provide a signed cryptographic proof that simply confirms we meet the age requirement without revealing any other personal information. The service provider trusts this proof because it is tied to our key pair and backed by verifiable credentials issued by trusted entities.
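As a toy illustration of that idea, the TypeScript sketch below (Node crypto; all names invented) has a trusted issuer sign only the predicate, which any service can then verify without ever seeing a birth date. Production systems use richer selective-disclosure schemes, such as BBS+ signatures or zero-knowledge proofs, but the shape is the same.

import { generateKeyPairSync, sign, verify } from "node:crypto";

// The issuer (say, a government authority) holds a signing key pair.
const issuer = generateKeyPairSync("ed25519");

// The issuer attests only to the predicate, bound to the holder's identifier.
const claim = Buffer.from(JSON.stringify({ subject: "did:example:alice", ageOver: 18 }));
const proof = sign(null, claim, issuer.privateKey);

// The age-restricted service checks the proof against the issuer's public key.
// It learns that the subject is over 18, and nothing else.
const ok = verify(null, claim, issuer.publicKey, proof);
console.log(ok ? "age requirement met" : "proof rejected");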

Anchoring Identity with Multiple Key Pairs: Flexibility and Context

The power of a decentralized digital identity doesn’t stop with a single key pair. We can have multiple key pairs for different contexts—each serving a specific purpose or representing a unique aspect of our digital selves. For example, one key pair might be used for professional credentials, while another could be designated for personal interactions or healthcare records. This flexibility allows us to maintain privacy and security across various domains, ensuring that only relevant information is shared with the appropriate parties.

The World Wide Web Consortium (W3C) Decentralized Identifier (DID) standard makes adopting this approach feasible across different systems and platforms. DIDs enable us to create and manage multiple digital identities, each anchored by its cryptographic key pair, in a way that is interoperable and recognized by various services and organizations worldwide.
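For orientation, here is the shape of a minimal DID document, written as a TypeScript object for readability. The identifier and key value are fabricated; the field set follows the W3C DID Core data model.

// A minimal DID document: the public half of a key pair, published under a DID.
// "did:example" is the specification's illustrative method; values are made up.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:example:123456789abcdefghi",
  verificationMethod: [
    {
      id: "did:example:123456789abcdefghi#key-1",
      type: "Ed25519VerificationKey2020",
      controller: "did:example:123456789abcdefghi",
      publicKeyMultibase: "z6MkhaXgBZD...", // public key only; the private key stays with the subject
    },
  ],
  // Which key may be used to authenticate as this DID:
  authentication: ["did:example:123456789abcdefghi#key-1"],
};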

Owning Our Digital Identity: A New Paradigm

We reclaim sovereignty over our online lives by anchoring our digital identity to a key pair that only we control. We decide which credentials to share, with whom, and for how long. This approach fundamentally shifts the power dynamics, allowing us to build and manage our digital reputation just as we do in the real world—by accumulating trusted credentials over time.

This doesn’t mean eliminating centralized systems; instead, it integrates them into a more flexible, user-centric model. Governments, universities, banks, and other institutions continue to issue credentials, but now they do so in a way that respects our control over our identities. This isn’t about replacing one system with another; it’s about creating a bridge that combines the best of both worlds, where centralized trust meets decentralized control.

A Future Anchored by Sovereignty and Flexibility

The promise of a truly self-sovereign digital identity is no longer a distant dream. By combining the strengths of cryptographic technology and decentralized frameworks like DIDs, we can create a new digital identity paradigm that respects our privacy, protects our data, and places control back in our hands. This isn’t about tearing down existing systems; it’s about enhancing them, building bridges, and creating a digital future where our identities are secure, trusted, and uniquely ours.

With cryptographic key pairs and the W3C DID standard as the anchors of this new approach, we move towards a future where our digital identities are as secure, private, and flexible as our real-world selves. The journey starts now, with each of us reclaiming the power to own and manage our digital selves, navigating the digital realm with confidence and autonomy.

The post Decentralized Identity: It’s Not What You Think appeared first on Dhiway.


PROPERTY TOKENIZATION – REVISITING THE WHY BEHIND DEMATERIALISATION

India’s real estate market is complex, with strict regulations on property ownership. Land disputes are a major issue, accounting for 66% of civil cases and causing significant economic drain. Poor record-keeping and outdated land titles contribute to these disputes. The document proposes using blockchain technology and Verifiable Credentials (VCs) to create a more efficient, transparent, and secure system for managing land records and resolving disputes. Real estate tokenization is emerging as a solution, allowing fractional ownership and increased liquidity. A partnership between Rooba.Finance and Dhiway aims to combine asset tokenization and blockchain technology to innovate in this space.

The overall goal is to use technology to address India’s property-related legal and economic challenges.

The Indian real estate market is a unique one, governed by countless laws, regulations, and state-level amendments which control, and in some cases prohibit, the purchase of land by non-domiciled Indian residents. As a rough rule of thumb, foreign nationals who do not reside in India cannot have property registered in their names. Persons of Indian Origin (PIOs) and Non-Resident Indians (NRIs) are restricted from buying agricultural, plantation, farm and other such land, though they are not prohibited from purchasing, selling or inheriting residential or commercial land, save for one caveat – some states prohibit non-domiciled individuals from purchasing land of any type.

An indicative list of central laws that govern the purchase of land follows:

- Transfer of Property Act, 1882
- Registration Act, 1908
- Indian Stamp Act, 1899
- Real Estate (Regulation and Development) Act, 2016
- Benami Transactions (Prohibition) Act, 1988
- Foreign Exchange Management Act (FEMA), 1999

For NRIs to purchase residential property, the following documents are necessary:

- Passport and/or OCI Card
- PAN Card
- Power of Attorney (PoA) registered for the specific transaction, if the NRI is not physically available for registration.

As regards agricultural land, all NRIs and PIOs are prohibited from purchasing it, though there is no bar on inheritance. However, in many states, even resident Indian citizens face restrictions relating to the purchase of land, or the conversion of agricultural land to non-agricultural (N.A.) land by mutation.

The long and short of it is that India makes real estate hard to buy, requires stringent documentation, and maintains, for all intents and purposes, a layered set of central and state-level laws to accommodate its diversity.

Despite this extensive legal system, an estimated 7.7 million people in India are affected by conflict over 2.5 million hectares of land, threatening investments worth more than Rs 14 lakh crore. Since land is central to India’s developmental trajectory, finding a solution to land conflict is a crucial policy challenge for the Indian government. Land disputes account for the largest set of cases in Indian courts – 25 percent of all cases decided by the Supreme Court involved land disputes, and surveys suggest that 66 percent of all civil cases in India are related to land or property disputes. The average pendency of land acquisition cases, from creation to resolution in the Supreme Court, is 20 years. Some reports indicate that more than two-thirds of litigation pertains to property.

Data around Supreme Court (SC) cases is alarming. Cases pertaining to property that manage to reach the Apex Court at ‘Special Leave Petition’ or ‘Leave to Appeal’ stages are a mixed bag, ranging from land acquisition to conventional title disputes. To put it into perspective, the pecuniary jurisdictions of most states’ district courts have been raised to unlimited to ensure that High Courts do not get clogged by litigation. Up until 2015, litigants could approach High Courts directly to file property cases concerning properties over a certain value. Now, commercial disputes must all go to district courts first, and require mandatory mediation in order to prevent lis (legal dispute) from being joined in the first place. Despite this, there is an alarming rate of litigation prevalent across all asset-value classes. This trigger-happy litigious mentality has ramifications beyond protracted pendency of cases. Individuals from lower socio-economic strata are unable to receive justice due to pendency in courts. Since they are unable to access quality legal advice, they often spend as long as 20 years or more litigating, generally on questions of title and devolvement of title.

In principle, the Supreme Court must only deal with disputes concerning questions of law that have not been settled or require revisiting or interpretation. Broadly speaking, the disputes with the highest incidence of percolating to the SC are land acquisition cases. By and large, as indicated by the figures above, 66% of all pending court cases comprise property-related disputes, which can be bifurcated into private and against the state (land acquisition). Private disputes (between private parties, juristic or natural) can be further divided into those involving the title (competing title interests or encroachment) and those relating to devolvement (wills).

Cases which are not mediated or settled result in litigation, which has two economic outcomes. The first is that litigants lose money in hefty legal fees; the other is that the economy is detrimentally affected by assets being locked in encumbrance. Without proposing some utopian, litigation-free universe, what can technology actually solve in this status quo?

By 2040, the real estate market is projected to grow to Rs. 65,000 crore (US$ 9.30 billion) from Rs. 12,000 crore (US$ 1.72 billion) in 2019, and to contribute 13% to the country’s GDP by 2025. Retail, hospitality, and commercial real estate are also growing significantly, providing the much-needed infrastructure for India’s growing needs. The problems at hand are an economic drain on people and a judicial strain on infrastructure, resulting in a lack of access to justice.

The solution? Verifiable provenance through digital records. Over the last decade, concerted efforts have been made to shift towards building and deploying Digital Public Infrastructure to solve the problems pertaining to data within India. Currently, the lack of trustworthy records accounts for a significant amount of litigation as well as the inability of government schemes to function. There are significant errors and discrepancies in the maintenance of land records. In a study conducted in Rajasthan, in 24 percent of cases the difference between the area on record and the area measured was more than 20 percent. To compound this, land titles are often considered presumptive, meaning that the person currently occupying the land is assumed to be its owner. The same study revealed that the state ceased maintaining records of land possession in 1972, and there is no data on land possession at the tehsil level. As a result, title records are frequently outdated; the registered owner might have died or sold the property without updating the records, making it challenging to determine current ownership.

Private disputes pertaining to joint ownership also take root in poor record-keeping. It gets particularly tricky when succession cases are instituted well into the future, sans any verifiable records. In India, devolvement follows religious or custom-based inheritance by default, unless expressly revoked by a will, thereby choosing testamentary succession (a quagmire of litigation in itself). All this has a detrimental impact on the ease of doing business rankings, specifically in respect of contract enforcement and property registration. India is currently ranked 163rd and 166th, respectively, on the abovementioned fronts. Both these factors, once again, are greatly affected by India’s persistent problem: an overwhelming number of land litigations.

In the early 90s, humanity was at the dawn of personal computing and the era of the internet. Juxtaposed to this groundbreaking advancement, India witnessed one of the largest scale financial frauds ever, the Harshad Mehta Scam. In this backdrop, the Securities and Exchange Board of India (SEBI) identified authenticity of securities as a paramount concern, and a hole to be plugged. By 1996, demat was mandated across public securities markets, ushering in an era of depositories, clearing corporations, registrar-cum-transfer agents and stock exchanges. SEBI used regulated intermediaries to ensure the safety and security of individuals participating in India’s securities markets. 

To date, some sectors of financial markets, such as private markets, have been left largely untouched by digitisation or dematerialisation. This has resulted in information asymmetry and data silos, culminating in opaque markets, inefficiencies in transactability and a lack of trust. At this juncture, we need to look towards innovative technology solutions to improve the sourcing, sharing and verification of data which assists the public in making financial decisions. At present, in 2024, we are witnessing increasing use cases of DLT and AI, and it seems only fitting that as we consider the evolving avatar of the internet, we must adopt and adapt or risk being mired in legacy market inefficiencies.

In recent years, real estate tokenization has emerged as an unconventional investment option with advantages for both issuers and investors. The real estate sector now makes up about 40% of the digital securities market, amounting to approximately $200 million. Real estate tokenization typically turns a property’s value into a token stored on a blockchain that can be transferred and owned digitally; these divisible tokens represent fractional shares of ownership in the underlying real estate. A reliable database is necessary for private markets to become more liquid. Instead of being centralised, we think that this new database will be distributed and owner-controlled.

So, how does the Finternet Project and its contributors aim to solve this population-scale problem of verifiable data? 

The vision of the Finternet is to build a set of rails for a user-centric ecosystem that unifies various fractured and siloed ecosystems using universal principles translated through technology. In the narrow compass of real estate, availability of authenticated data relating to property will unlock the hidden financial potential of a traditionally illiquid asset, remedying a major cause of litigation in India. 

Verifiable Credentials

Finternet can revolutionise the administration and evidence process for dispute-resolution by integrating advanced digital tools and decentralised technologies. Through blockchain, it ensures that records and evidence are digitised and immutable, providing a reliable and tamper-proof source of truth. Verifiable Credentials (VCs) allow for instant authentication and verification of evidence, streamlining the process and ensuring authenticity. Real-time data access and transparency are enhanced, allowing for quicker decision-making. 

VCs are digital certificates that can be used to prove the authenticity of information regarding an individual, organization or an asset. These credentials are stored securely and can be presented and verified in a decentralised manner, without the need for intermediaries. VCs are particularly useful in scenarios where trustworthiness is a priority, like in the case of property disputes.
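Concretely, a verifiable credential is a signed JSON document. The sketch below shows what a hypothetical property-ownership credential could look like under the W3C VC data model, written as a TypeScript object; the issuer DID, parcel reference, and proof value are all invented for illustration.

// A hypothetical property-ownership credential (W3C VC data model shape).
const ownershipCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "PropertyOwnershipCredential"],
  issuer: "did:example:land-registry",   // e.g., a government land registry
  issuanceDate: "2024-09-01T00:00:00Z",
  credentialSubject: {
    id: "did:example:owner-123",         // the owner's DID
    parcelId: "SURVEY-42/1B",            // invented parcel reference
    ownershipShare: "100%",
  },
  proof: {
    type: "Ed25519Signature2020",
    verificationMethod: "did:example:land-registry#key-1",
    proofValue: "z58DAdFfa9...",         // issuer's signature (placeholder)
  },
};
// Any party can verify the proof against the registry's published DID document,
// without contacting the registry itself.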

In the context of property, verifiable credentials can be employed to:

- Authenticate Property Ownership: VCs can be issued by government authorities or trusted entities to certify ownership of a property. These credentials can be cryptographically verified by any party, ensuring that the ownership claim is legitimate and reducing the likelihood of fraudulent claims.
- Streamline Property Transfers: During property transfers, VCs can be used to verify the identities of the parties involved, as well as the authenticity of the property title. This can significantly reduce the time and cost associated with the transfer process, as it eliminates the need for extensive paperwork and third-party verification.
- Resolve Title Disputes: In cases where there is a dispute over property ownership, VCs can serve as tamper-proof evidence of ownership history. The use of VCs can expedite the resolution process by providing courts or arbitration bodies with a clear, verifiable record of ownership, thus reducing the duration and complexity of litigation.
- Improve Transactability: By using VCs, all parties involved in a property transaction can have access to verified and up-to-date information. This transparency helps in faster business decisions such as loans-against-property, home loans, credit decisions, etc.
- Integrate with Smart Contracts: VCs can be integrated with smart contracts to automate the execution of agreements based on verified conditions. For instance, a smart contract could automatically release payment upon the verification of a property transfer credential, ensuring that both parties fulfill their obligations (see the sketch after this list).
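Here is the promised sketch of that last point, written as plain TypeScript rather than an actual smart-contract language, purely to show the control flow: funds move only once the transfer credential verifies. The types and functions are invented for the example.

// Invented types for the sketch; a real system would verify a signed VC proof.
type TransferCredential = { type: string[]; subject: string };

declare function verifyCredentialProof(cred: TransferCredential): boolean; // assumed verifier

function settleOnVerifiedTransfer(cred: TransferCredential, releasePayment: () => void): void {
  // Release escrowed funds only when the property-transfer credential checks out.
  const isTransfer = cred.type.includes("PropertyTransferCredential");
  if (isTransfer && verifyCredentialProof(cred)) {
    releasePayment();
  } else {
    throw new Error("transfer credential not verified; payment withheld");
  }
}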

By leveraging VCs within the property sector, India can move towards a more efficient, transparent and secure system of managing land records and resolving disputes. This technology has the potential to reduce the burden on the judiciary, minimise economic losses due to encumbered assets, and enhance the overall ease of doing business in the country.

Conclusion

The Indian real estate market faces significant challenges due to complex regulations, widespread land disputes, and outdated record-keeping systems. These issues result in economic inefficiencies, overburdened courts, and barriers to investment and development.

However, emerging technologies offer promising solutions to these long-standing problems. The integration of blockchain technology, Verifiable Credentials, and asset tokenization has the potential to revolutionize property management and transactions in India. By creating a more transparent, secure, and efficient system for recording and verifying property ownership, these innovations could:

- Reduce the number of property-related disputes
- Streamline property transfers and reduce associated costs
- Improve access to justice by providing clear, verifiable records
- Enhance the liquidity of real estate assets through tokenization
- Attract more investment to the real estate sector

The path forward involves continued development of these technologies, their integration into existing legal and administrative frameworks, and widespread adoption by stakeholders in the real estate sector. While challenges remain, the potential benefits of this technological revolution in property management are substantial and could transform India’s real estate landscape in the coming years.

Dhiway and Rooba alliance

Rooba.Finance and Dhiway are strategically collaborating to harness their respective strengths in asset tokenization and blockchain technology, driving innovation in the financial and property sectors. Rooba.Finance, with its expertise in asset tokenization, is pioneering the creation of digital representations of real-world assets, allowing for fractional ownership and enhanced liquidity in the market. Dhiway, a leader in blockchain-based infrastructure, provides the robust, secure, and transparent technology backbone necessary to support these digital assets. By integrating Dhiway’s advanced blockchain solutions, Rooba.Finance ensures that each tokenized asset is securely documented, traceable, and compliant with regulatory standards. This partnership not only facilitates the creation of new investment opportunities but also advances the secure and efficient management of digital assets, paving the way for a more decentralized and democratized financial ecosystem.

The post PROPERTY TOKENIZATION – REVISITING THE WHY BEHIND DEMATERIALISATION appeared first on Dhiway.

Friday, 13. September 2024

Anonym

Aries VCX: Another Proof Point for Anonyome’s Commitment to Decentralized Identity 

For nearly two years, Anonyome Labs has co-maintained an open source project from Hyperledger called Aries-VCX. VCX is an important decentralized identity (DI) community project, which provides the backbone for other DI software products, such as our own Sudo Platform DI Edge Agent SDK for native mobile applications. In this article, we will explore the details of Aries-VCX, Anonyome’s contributions, and what’s next for this exciting project.

What is Aries-VCX? 

Aries-VCX is a project under the Hyperledger Aries group. This group strives to provide complete toolkits for DI solutions and digital trust, including the ability to issue, store and present verifiable credentials with maximum privacy preservation, and establish confidential, ongoing communication channels for rich interactions. VCX sits alongside other popular projects such as Aries Cloud Agent Python (ACA-Py) and Credo (formerly Aries Framework JavaScript under Hyperledger). 

While these projects pursue a similar goal, they complement each other nicely. VCX is written primarily in Rust and targets both cloud and mobile native consumers. By comparison, Credo targets cloud and mobile JavaScript consumers, and ACA-Py targets only cloud consumers. Support for native mobile consumers was an essential goal when building the technology stack for Anonyome’s Edge Agent SDK and all other Sudo Platform SDKs, because providing native SDKs gives our consumers flexibility when integrating into their mobile applications and doesn’t limit them to JavaScript or React Native based environments. 

Further, VCX differs from other Aries projects in that it has historically focused on providing lower-level building blocks for DI SDKs and applications rather than batteries-included DI frameworks for consumers to pick up. We fully appreciate the low-level components because they give us the flexibility to design Anonyome’s Edge Agent SDK with an optimised internal engine and easy-to-use APIs that are in line with our Sudo Platform standards. However, VCX’s lower-level approach also presents a higher barrier to entry for other SDKs and applications to consume. 

Brief history of VCX 

VCX has been around since 2017 and is one of the first implementations of an Aries protocol-compliant library. Evernym created the original library, which was eventually moved into the Hyperledger Indy SDK project. This was to serve as a reference implementation for integrating with the Indy SDK for the Aries protocols. In 2020, the project was moved into a dedicated Hyperledger project by Absa Group, beginning a new era of development beyond the Indy SDK. 

VCX today provides a DI toolbox with a large suite of functionality that Anonyome and others in the industry use. The toolbox includes: 

- DIDComm V1: VCX supports DID Communication V1, allowing end-to-end-encrypted messages to be encoded and decoded between DIDs.
- Aries protocols: VCX provides tools for stepping through various agent-to-agent protocols defined by Aries. The protocols implemented in VCX allow the agent to engage with other agents to establish new secure connections, issue or receive credentials, present or verify a presentation of credentials, exchange text-based messages, and more. The latest list of supported protocols is here.
- DID management: DIDs are foundational to DI, and VCX has invested time in creating a reliable and clean set of DID management tools for a range of different DID methods. This allows consumers to easily resolve, create and update DIDs involved in their DI interactions. This toolbox is designed with extensibility in mind, allowing new DID methods to be added in the future for further interoperability.

Anonyome’s journey with VCX

In our pursuit of creating a highly optimized and secure Edge Agent SDK, we wanted to bring into our technology stack the latest cutting-edge DI and Aries libraries. However, given the history we’ve just outlined, VCX in 2022 was highly tethered to the Indy SDK—an SDK that was unfortunately heading towards deprecation at the time. As a strong believer in and adopter of VCX, we set out to join VCX and contribute a major pivot to the project: decoupling VCX from the Indy SDK. This was a major refactor that other Aries projects, such as ACA-Py, also had to work through around this time.  

The changes allowed consumers to plug in and use modern Indy SDK replacement components (Aries Askar, Indy VDR, Anoncreds-rs) instead. In practice, this means users benefit from receiving the latest features and optimizations from these libraries, as well as better interoperability (e.g., a larger range of Decentralized Identifier (DID) methods beyond Indy-based DID methods). 

Shortly after Anonyome’s contribution, in early 2023 we became a co-maintainer of the VCX project and we have worked alongside other individuals and companies such as Absa Group and Instnt. Since joining, Anonyome has contributed to a wide range of aspects in VCX, such as: 

- Kickstarting a modern foreign function interface (FFI) wrapper using Mozilla’s UniFFI, allowing the Rust library to be consumed natively from Android and iOS
- Implementing some of the latest Aries Interop Protocols (AIP2 credential issuance and presentation messages)
- Contributing to the Aries Agent Test Harness on behalf of VCX, an effort that allows VCX to be benchmarked for interoperability with other Aries agents (such as ACA-Py and Credo)
- Performing regular maintenance duties: contributing to architectural design decisions, codebase housekeeping, assisting the VCX community, and participating in regular community meetings.

What’s next for VCX?

VCX has come a long way since its beginnings with Indy SDK: it’s advanced from an Indy reference implementation into a rich and extensible toolbox for DI operations, Aries, DIDs, DIDComm, AnonCreds, and so on. But VCX development is not slowing down, especially since the standards rapidly iterate and grow in the DI ecosystem. 

VCX is keeping its eye on what the community is asking for, and where the ecosystem is heading. A few notable items ahead include: 

- DIDComm V2: Currently VCX is using DIDComm V1 for message transport and structuring in the Aries protocols it supports, but the next iteration of the standard—DIDComm V2—is now progressively rolling out into the Aries community. VCX plans to be a part of this transition.
- VCX framework: As mentioned, VCX has historically been a lower-level “toolbox” for DI operations, which is great for flexibility but hinders broad adoption. Our co-maintainer and contributors at Instnt are now working on building a framework on top of VCX, an initiative to provide a more application-friendly interface (like ACA-Py and Credo).
- DID toolbox enhancements: Since the move away from Indy, VCX has pursued supporting a wider range of DID methods from other blockchain and non-blockchain-based ecosystems, such as did:web and the latest did:peer specification (a minimal did:web resolution sketch follows this list). VCX will continue growing support for DID methods, building a rich and clean toolbox for “all things DIDs”.
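
To make the DID tooling concrete: the did:web method, for instance, resolves a DID by transforming it into an HTTPS URL and fetching a DID document. The following minimal Python sketch illustrates that resolution algorithm; it is purely illustrative and is not VCX’s actual Rust API (the function name and the simplified error handling are our own).

import json
import urllib.parse
import urllib.request

def resolve_did_web(did: str) -> dict:
    # Simplified did:web resolution: "did:web:example.com" maps to
    # https://example.com/.well-known/did.json, while path segments
    # ("did:web:example.com:user:alice") map to
    # https://example.com/user/alice/did.json.
    prefix = "did:web:"
    if not did.startswith(prefix):
        raise ValueError(f"not a did:web DID: {did}")
    parts = did[len(prefix):].split(":")
    domain = urllib.parse.unquote(parts[0])  # a port, if present, is %3A-encoded
    if len(parts) == 1:
        url = f"https://{domain}/.well-known/did.json"
    else:
        path = "/".join(urllib.parse.unquote(p) for p in parts[1:])
        url = f"https://{domain}/{path}/did.json"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

# Usage (requires the domain to actually serve a DID document):
# doc = resolve_did_web("did:web:example.com")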

Anonyome is very excited for the future of VCX and we’re glad we were a part of the journey thus far as a co-maintainer. We’d like to give a huge thanks to the co-maintainers and contributors who have made VCX what it is today—open-source thrives most with a diverse community behind it. 

If you’d like to join the VCX efforts, or just hear more about what we’re doing, feel free to join our biweekly community meeting or reach out on Discord.

The post Aries VCX: Another Proof Point for Anonyome’s Commitment to Decentralized Identity  appeared first on Anonyome Labs.


paray

Practical Steps for Advising on BOIR Compliance

When advising clients on filing FinCEN’s Beneficial Ownership Information (BOI) reporting obligations, professionals should offer clear, practical guidance to ensure compliance and mitigate potential risks.  It is obviously helpful to start out by educating small business clients on the fundamentals of BOIR filing:    – Who needs to file: Explain that most small corporations, LLCs, …
When advising clients on filing FinCEN’s Beneficial Ownership Information (BOI) reporting obligations, professionals should offer clear, practical guidance to ensure compliance and mitigate potential risks.  It is obviously helpful to start out by educating small business clients on the fundamentals of BOIR filing:    – Who needs to file: Explain that most small corporations, LLCs, … Continue reading Practical Steps for Advising on BOIR Compliance →

KuppingerCole

cidaas access management

by John Tolbert This KuppingerCole Executive View report looks at the issues and options available to IT managers and security strategists to manage identity access to complex IT infrastructures. A technical review of the cidaas access management platform is included.

by John Tolbert

This KuppingerCole Executive View report looks at the issues and options available to IT managers and security strategists to manage identity access to complex IT infrastructures. A technical review of the cidaas access management platform is included.

Decentralized Identity: Potential for Breakthrough Innovation

by Martin Kuppinger Decentralized Identity (DCI) has evolved over more than a decade and is reaching the tipping point for widespread adoption and triggering massive innovation in how businesses and governments interact with customers, consumers, employees, or citizens. From centralized identity siloes to decentralized identity wallets DCI, also referred to as SSI (Self-Sovereign Identity), i

by Martin Kuppinger

Decentralized Identity (DCI) has evolved over more than a decade and is reaching the tipping point for widespread adoption and triggering massive innovation in how businesses and governments interact with customers, consumers, employees, or citizens.

From centralized identity siloes to decentralized identity wallets

DCI, also referred to as SSI (Self-Sovereign Identity), is a concept that differs fundamentally from established models. Commonly, organizations manage the identities of individuals in their own systems, creating identity siloes and forcing individuals to register with many different parties. Everyone experiences this on an almost daily basis when using the Internet. While some identities, such as those from LinkedIn, Facebook, Google, or Apple, can be reused, they are still centralized and not ubiquitous.

In contrast, DCI leaves the identity and its attributes with the individual. Based on standards, that information can be flexibly exchanged with other parties. So-called verifiable credentials (VCs) carry information such as a name, email address, postal address, employer, employment status, or any other attribute. The concept of DCI is open and does not limit what can be conveyed in VCs. This openness is essential because it enables DCI to serve any type of use case, especially since things, devices, and organizations can (and, over time, will) have decentralized identities of their own.

DCI builds on a concept of issuers that issue VCs, holders – commonly the individuals – that hold VCs, and verifiers that consume VCs. The VCs are stored by the individual in so-called wallets. Over time, the term wallet may turn out to be misleading, because we will potentially hold far more information as VCs in a wallet than we carry as cards in our wallets today. The use cases will also become much broader.
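
To give a feel for what a VC actually contains, here is a minimal, unsigned credential in the shape defined by the W3C Verifiable Credentials Data Model, sketched in Python. The issuer, holder, and claim values are hypothetical, and a real credential would additionally carry a cryptographic proof from the issuer.

import json

credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "EmploymentCredential"],
    "issuer": "did:web:employer.example",        # the issuer's DID (hypothetical)
    "issuanceDate": "2024-10-02T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder123",           # the holder's DID (hypothetical)
        "employer": "Example Corp",
        "employmentStatus": "active",
    },
}
print(json.dumps(credential, indent=2))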

Decentralized identity: More than just verification, onboarding and authentication

DCI today is frequently seen as a means of having a reusable verified identity on hand, based on human-assisted or fully automated IDV (Identity Verification) processes. This enables trusted interactions with other parties such as organizations or governmental agencies.

The VCs then provide additional data and can, for instance, simplify onboarding processes such as registering with an eCommerce site. Based on the verified identity, the secure wallet, and the ability to open that wallet, authentication processes can be simplified.

However, these aspects only scratch the surface of the potential that DCI holds. VCs can also be used for process automation and optimization. Envision onboarding external contributors to a project: this process can become fully automated based on the name, the employer, the employment status, and some other information. Or envision applying for a loan at a bank based on other VCs, ranging from the verified identity to monthly salary statements, marital status, proof of existing real estate, and so on. The cost of the expensive AML (Anti-Money Laundering) and KYC (Know Your Customer) processes in banks would drop massively, as would the cost of approving (or rejecting) loans. Process cost optimization is a massive potential of DCI.
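
As a toy illustration of that kind of automation, a verifier-side policy check over claims presented as VCs could look like the sketch below. The claim names and threshold are invented for the example, and cryptographic verification of the presented credentials is assumed to have happened upstream.

REQUIRED_CLAIMS = {"employer", "employmentStatus", "monthlySalary"}

def screen_loan_application(claims: dict) -> bool:
    # Assumes the presented credentials were already cryptographically verified.
    missing = REQUIRED_CLAIMS - claims.keys()
    if missing:
        print(f"Rejected: missing verified claims {sorted(missing)}")
        return False
    return claims["employmentStatus"] == "active" and claims["monthlySalary"] >= 3000

print(screen_loan_application({
    "employer": "Example Corp",
    "employmentStatus": "active",
    "monthlySalary": 4200,
}))  # -> True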

But there is more. Consent could be managed by VCs that allow the use of certain information by defined parties for a defined purpose and limited time. People could share health data in a controlled manner as VCs. The potential is virtually infinite and allows for breakthrough innovation in the digital economy.

Breakthrough potential: Disruption in business that does not break IT

DCI can become disruptive to the business, with organizations that leverage its potential winning by delivering new, innovative services, but also by optimizing their processes and thus cost. The recent eIDAS 2.0 regulation, which among other changes mandates that EU member states provide EU DI (EU Decentralized Identity) wallets to every citizen and adopt this technology for eGovernment use cases, should significantly accelerate the adoption of DCI approaches. These wallets are a foundation for implementing further DCI use cases.

Fortunately, disruption in business does not equal disruption in IT. DCI adds to what exists. When a customer is registered via DCI and purchases goods, this is still reflected by records in the ERP system of the organization. When someone is onboarded, there still might be an entry in an internal directory.

Just adding DCI at the front end of the organization will not unlock the full potential, though. Consuming VCs to make decisions, from access authorizations to process automation, requires changes in the backends. In many cases, this will be an evolutionary process.

Given the immense potential of DCI, it is high time that organizations start evaluating that potential and think about the innovation it can bring to their business or to the way governments serve their citizens. This must involve everyone in the organization, not just the identity team.

As a guest of Ergon Informatik, Martin Kuppinger, Principal Analyst at KuppingerCole Analysts, will talk about this topic more in depth at the it-sa Expo & Congress in Nuremberg on October 23rd.


Metadium

CertiK Skynet

CertiK Skynet Dear Community, We are pleased to share the latest update on Metadium’s progress with CertiK Skynet. In our commitment to the continuous development and trust of the Metadium project, we prioritize enhancing security and transparency. As part of this effort, Metadium has recently completed a security audit and KYC certification with CertiK Skynet. What is CertiK Sk

CertiK Skynet

Dear Community,

We are pleased to share the latest update on Metadium’s progress with CertiK Skynet.

In our commitment to the continuous development and trust of the Metadium project, we prioritize enhancing security and transparency. As part of this effort, Metadium has recently completed a security audit and KYC certification with CertiK Skynet.

What is CertiK Skynet?

CertiK Skynet is a platform that monitors and evaluates the security and reliability of blockchain and cryptocurrency projects in real-time. It provides services related to security audits of smart contracts and blockchain systems. Skynet focuses on continuously monitoring each project’s smart contracts and detecting potential threats.

- Smart Contract Audits: CertiK rigorously reviews and analyzes the code of smart contracts to identify vulnerabilities and weaknesses that malicious actors could exploit. This process ensures that blockchain projects are secure and trustworthy.
- Penetration Testing: The company conducts thorough penetration testing to simulate potential attacks, safeguarding blockchain systems from hacks and security breaches.
- Security Monitoring: CertiK offers ongoing monitoring of blockchain projects to identify and address potential threats in real time.
- Skynet: CertiK’s automated security and monitoring tool provides real-time insights, on-chain monitoring, and automated auditing.

Smart contracts are a core technology in cryptocurrency projects, essential for enhancing project efficiency, transparency, and trustworthiness. Through this technology, projects can operate autonomously and offer users and investors a high level of security.

Key Achievements:

- CertiK Security Score increased by 5.88 points.
- Security Score Rank rose by 513 positions.
- Obtained KYC certification badge.

Key Highlights:

CertiK Skynet Audit: Metadium has confirmed the safety of its platform’s code and systems through a thorough security audit by CertiK Skynet. Twenty-nine items were approved and improved during this audit, and the code audit score increased by 23.68 points.

KYC Certification:

Additionally, Metadium has enhanced the transparency of its platform operations through CertiK Skynet’s KYC certification process. KYC certification is a critical procedure that verifies the project team’s identity and assesses compliance with anti-money laundering (AML) regulations. CertiK’s KYC service maintains the highest standards of data protection while providing rigorous scrutiny of the project team’s personal identity and background.

CertiK’s investigators validate cryptocurrency development teams and award a “KYC Badge” to those who successfully pass the due diligence process. This badge enhances the project team’s accountability and trustworthiness while reducing and mitigating risks of fraud and abuse. Metadium has obtained this badge, demonstrating its adherence to laws and regulations.

CertiK Skynet Score:

As a result of all these processes, Metadium’s CertiK Skynet rank and score have improved. This score reflects a comprehensive evaluation of Metadium’s security, stability, and public aspects, reaffirming the project’s technical excellence and reliability to the market.

The Metadium team is committed to continuing to build an even safer and more reliable platform. The audit and certification through CertiK Skynet are just the beginning, and we will consistently strive to maintain your trust.

Thank you for your continued support.

Metadium Team


Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

CertiK Skynet was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Thursday, 12. September 2024

KuppingerCole

The Security You Need: Seamlessly Integrating PAM and IGA for Ultimate Protection

In today's rapidly evolving cybersecurity landscape, organizations face significant challenges in integrating Privileged Access Management (PAM) and Identity Governance and Administration (IGA) systems. The complexity of integration, especially with legacy systems, coupled with the need to scale for cloud environments, poses substantial hurdles for IT professionals seeking to enhance their securit

In today's rapidly evolving cybersecurity landscape, organizations face significant challenges in integrating Privileged Access Management (PAM) and Identity Governance and Administration (IGA) systems. The complexity of integration, especially with legacy systems, coupled with the need to scale for cloud environments, poses substantial hurdles for IT professionals seeking to enhance their security posture.

Modern technology offers solutions to these challenges through unified identity platforms. These platforms enable organizations to manage security from on-premises to cloud environments with modular, integrated solutions across IGA, IAM, PAM, and Active Directory Management and Security. By leveraging API-first approaches and identity correlation systems, businesses can achieve seamless integration, reduce operational risks, and support agile just-in-time scenarios.

Paul Fisher, Lead Analyst at KuppingerCole, will discuss the latest trends in PAM and IGA integration, highlighting the importance of a unified approach to identity security. He will explore the challenges organizations face in implementing these systems and offer insights into overcoming common obstacles, ensuring compliance, and maintaining robust governance in an ever-changing threat landscape.

Jason Moody, Global Product Marketing Manager, PAM, and Bruce Esposito, Global Product Marketing Manager, IGA, both from One Identity, will showcase their Unified Identity Platform. They will demonstrate how this solution addresses identity sprawl, enhances business agility, and supports both internal and external users. The speakers will also highlight One Identity's approach to integrating PAM and IGA, emphasizing its flexibility and scalability.




Finicity

Nacha’s Preferred Partner offerings evolve to include open banking and account validation

As governor of the automated clearing house (ACH) Network that moves $80 trillion in funds electronically each year, U.S. payments industry association Nacha has been moving payments forward for 50… The post Nacha’s Preferred Partner offerings evolve to include open banking and account validation appeared first on Finicity.

As governor of the automated clearing house (ACH) Network that moves $80 trillion in funds electronically each year, U.S. payments industry association Nacha has been moving payments forward for 50 years. In recognition of the tremendous, data-driven changes shaping the industry in just the last few years, Nacha updated the categories for its Preferred Partner Program.

Nacha selects Preferred Partners, including Mastercard, whose payments technology offerings align with Nacha’s network advancement strategy. Mastercard Open Banking services are provided by Finicity, which has been a Nacha preferred partner in all partner solutions categories — previously defined as Compliance, Risk and Fraud Prevention, and ACH Experience — since 2020.

Going forward, Mastercard will continue to provide advanced, secure and trusted payment solutions as a Nacha Preferred Partner in three key areas: Risk and Fraud Prevention, as well as new categories Account Validation and Open Banking. These solutions are integral to the future of digital payments.

The power of consumer-permissioned data

Account-to-account (A2A) consumer bill payments and transfers totaled $9 trillion in 2023 and continue to grow at a 7% compound annual rate, according to Nacha, driven by consumers’ preference for fast and convenient payment options. Failed payments and fraudulent charges can be costly and take time to resolve, so it’s critically important to protect A2A payments with insights and analytics that keep risk and cost to a minimum.

Ensuring secure and successful digital payments starts with a robust account validation process to verify critical details like account type, ownership and balance information. These solutions not only help optimize payments, reduce risk and lower costs for fintechs and merchants, but also enable the safe and seamless payment experiences that end users demand (a small structural pre-check sketch follows the list below). Mastercard Open Banking for Payments solutions include:

- Account Owner +: Verifies identity by analyzing risk signals, insights and scores related to personal information, device details and IP addresses.
- Account Payment Details: Retrieves account and routing numbers and indicates real-time payment availability.
- Balances: Gathers insights from cleared and available balances and time stamps, with a dynamic recency setting.
- Payment Success Indicator: De-risks payments with predictive insights from a weighted, multifactor settlement risk score.
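
Account validation typically layers remote data checks like these on top of basic structural ones. As a tiny, purely illustrative structural check (not part of Mastercard’s API), the standard ABA routing-number checksum can be computed like this in Python:

def valid_routing_number(rtn: str) -> bool:
    # Standard ABA checksum: the 3-7-1 weighted digit sum must be divisible by 10.
    if len(rtn) != 9 or not rtn.isdigit():
        return False
    weights = (3, 7, 1, 3, 7, 1, 3, 7, 1)
    return sum(int(d) * w for d, w in zip(rtn, weights)) % 10 == 0

print(valid_routing_number("011000015"))  # a commonly cited valid example -> True
print(valid_routing_number("123456789"))  # fails the checksum -> False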

Mastercard’s advanced global network and decades of experience in risk and fraud prevention can help fintechs and merchants make smarter decisions in a fast-moving digital payments landscape. Ultimately, we strive to help our customers, partners and end users realize all the benefits of next-generation A2A payment technologies with the lowest possible risk.

To learn more about Mastercard Open Banking for Payments, click here.

The post Nacha’s Preferred Partner offerings evolve to include open banking and account validation appeared first on Finicity.


Spruce Systems

Meet the SpruceID Team: Parke Hunter

Parke, SpruceID’s marketing manager, combines marketing expertise and customer focus to help drive success.
Name: Parke Hunter
Team: Marketing
Based in: Denver, Colorado

About Parke

After getting my marketing degree from Virginia Tech (Go Hokies!), I landed my first job selling commercial insurance at GEICO—fun fact: I got to be the GEICO Gecko for a day.

I then transitioned into working in software implementation and customer success at a food service tech company. Still wanting to pursue a career in marketing while being able to continue working closely with the product development team and customers, I found my love for product marketing. I went on to work as a product marketing manager for a range of products (from data analytics software tools to Atlassian’s app development platform) for five years at Alteryx, Sisense, and Atlassian.

I started at SpruceID last year and have loved every minute of it! It's exciting to see how the company has grown throughout my time here, and I have had the opportunity to experiment and try my hand at other areas of marketing that I may not have been as familiar with before.

Parke as GEICO Gecko

Can you tell us about your role at SpruceID?

At SpruceID, my role spans managing our content funnel, social media, and customer highlights/case studies, as well as helping support areas such as hackathons, business development, and website updates. We are also gearing up to build out our product marketing function, which I am looking forward to.

What do you find most rewarding about your job?

What’s most rewarding about my job is that I feel that my work really impacts our company and mission. I feel driven and motivated by how our products help people.

Also, I may be biased, but our team is the best. SpruceID is made up of some of the smartest, kindest, and most fun individuals I have ever met. They are supportive, encouraging, and come together to work as a team and achieve a goal in a way I have never seen before.

What is the most important quality for someone in your role to have?

I think that the most important quality in a marketer is curiosity. 

Curiosity about understanding customers and personas, as well as the industry you’re in; about spotting trends in data; and about problem-solving and adapting to change when business needs shift and you have to learn new skills.

What has been the most memorable moment for you at SpruceID so far?

There have been so many it’s hard to choose!! One certainly stands out, though. At our fall 2023 offsite in Dublin, I was plucked from the crowd in an Irish pub to do an Irish jig on stage in front of hundreds of locals (and the entire company who I had just met in person for the first time!).

The moment we launched the California mDL was also a special and memorable moment for me.

How do you define success in your role, and how do you measure it?

There are so many ways our marketing team defines and measures success, from top to bottom of funnel.

We measure everything from brand awareness to lead generation, revenue growth, content engagement metrics, customer feedback, and awards/recognition, just to name a few. In marketing, we are also constantly evaluating the competitive landscape and understanding where we fit into it. As SpruceID grows, I know we’ll track more success metrics.

I am data and metrics-driven, and I define success in my role by the impact my work has on driving measurable results. Success to me means continuously learning, improving, and contributing to SpruceID's overall growth and strategic goals.

Fun Facts

What do you enjoy doing in your free time? In my free time, you can find me road-tripping, hiking or snowshoeing as one does in Colorado, watching reality TV, studying (I am currently getting my master's degree online), and hanging out with friends! I recently started Denver’s first “Food Critics Club” with a group of friends. We set out to taste-test a certain type of food (e.g., all of the croissants or empanadas in Denver) and have a picnic to try them all and rate them. That has been a blast!

If you could be any tree, what tree would you be and why? I would be a palm tree! Calm, resilient, and adaptable. Palm trees seem relaxed, go with the flow, and thrive in the sun (like me), but they are also much tougher than they seem and can weather wind and storms.

Interested in joining our team? Check out our open roles and apply online!

Join Our Team

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


KuppingerCole

Nov 19, 2024: Identity Security and Management – Why IGA Alone May Not Be Enough

Organizations are confronted with unprecedented challenges in managing and securing identities across hybrid environments due to the growing complexity of the digital landscape. While Identity Governance and Administration (IGA) solutions provide a foundation, the increasing complexity of identity ecosystems demands a more comprehensive approach to maintain visibility and control.
Organizations are confronted with unprecedented challenges in managing and securing identities across hybrid environments due to the growing complexity of the digital landscape. While Identity Governance and Administration (IGA) solutions provide a foundation, the increasing complexity of identity ecosystems demands a more comprehensive approach to maintain visibility and control.

Ocean Protocol

DF106 Completes and DF107 Launches

Predictoor DF106 rewards available. DF107 runs Sept 12 — Sept 19, 2024 1. Overview Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor. Data Farming Round 106 (DF106) has completed. DF107 is live today, Sept 12. It concludes on September 19. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE
Predictoor DF106 rewards available. DF107 runs Sept 12 — Sept 19, 2024

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 106 (DF106) has completed.

DF107 is live today, Sept 12. It concludes on September 19. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF107 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

- Predictoor DF: To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in Ocean docs.
- To claim ROSE rewards: see instructions in the Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF107

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF106 Completes and DF107 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

KYC (Know Your Customer) Checklist: Simplified

Achieve KYC compliance with our comprehensive checklist, including documents, best practices, and identity verification tips.

Know Your Customer (KYC) programs are a way for financial institutions to verify the identity of their clients. Not only does it help ensure compliance with government regulations, but KYC is also an important step in preventing fraud and other illegal financial activities. Without it, businesses in the financial sector could be subject to government penalties and a loss of customer trust. In this article, we’ll take a deeper look at KYC best practices and run through an easy-to-understand compliance checklist.

Wednesday, 11. September 2024

Microsoft Entra (Azure AD) Blog

Omdia’s perspective on Microsoft’s SSE solution

In July, we announced the general availability of the Microsoft Entra Suite and Microsoft’s Security Service Edge (SSE) solution which includes Microsoft Entra Internet Access and Microsoft Entra Private Access.     Microsoft’s vision for SSE   Microsoft’s SSE solution aims to revolutionize the way organizations secure access to any cloud or on-premises applications. It unif

In July, we announced the general availability of the Microsoft Entra Suite and Microsoft’s Security Service Edge (SSE) solution, which includes Microsoft Entra Internet Access and Microsoft Entra Private Access.

Microsoft’s vision for SSE

Microsoft’s SSE solution aims to revolutionize the way organizations secure access to any cloud or on-premises applications. It unifies identity and network access through Conditional Access, the Zero Trust policy engine, helping to eliminate security loopholes and bolster your organization’s security stance against threats. Delivered from one of the largest global private networks, the solution ensures a fast and consistent hybrid work experience. With flexible deployment options across other SSE and networking solutions, you can choose to route specific traffic profiles through Microsoft’s SSE solution.

Omdia's perspective

According to Omdia, a leading research and consulting firm, Microsoft’s entry into the SASE/SSE space is poised to disrupt the market. Omdia highlights that Microsoft’s focus is on an identity-centric SASE framework, which helps consolidate technologies from different vendors by extending identity controls to your network and enhancing team collaboration. A key strength for Microsoft, according to Omdia, is its ability to introduce Microsoft Entra Internet Access and Microsoft Entra Private Access seamlessly into existing identity management conversations—a strength that could lead to broader adoption of network access services as part of the same platform.

Conclusion

As you navigate the complexities of securing network access, Microsoft’s Security Service Edge solution helps you transform your security posture and improve user experience. It simplifies collaboration between identity and network security teams by consolidating access policies across identities, endpoints and network, all managed in a single portal, the Microsoft Entra admin center. Microsoft’s SSE solution provides a new pathway to implement zero trust access controls more effectively, enabling your organization to enhance its security posture while leveraging existing Microsoft investments.

To learn more about Omdia’s perspective on Microsoft’s SSE solution, read Omdia’s report, Microsoft announces general availability of its SASE/SSE offering.

Learn more and get started

Stay tuned for more Security Service Edge blogs. For a deeper dive into Microsoft Entra Internet Access and Microsoft Entra Private Access, watch our recent Tech Accelerator product deep dives.

To get started, contact a Microsoft sales representative, begin a trial, and explore Microsoft Entra Internet Access and Microsoft Entra Private Access general availability. Share your feedback to help us make this solution even better.

Nupur Goyal, Director, Identity and Network Access Product Marketing

Read more on this topic

- Simplify your Zero Trust strategy with the Microsoft Entra Suite and unified security operations platform, now generally available
- Microsoft’s Security Service Edge products now in General Availability
- Microsoft Entra Internet Access
- Microsoft Entra Private Access

Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community


auth0

All You Need To Know About Passkeys at Auth0!

There are so many resources out there about passkeys and each vendor has its own implementation of the standard. Let’s answer some of your frequently asked questions about passkeys at Auth0!
There are so many resources out there about passkeys and each vendor has its own implementation of the standard. Let’s answer some of your frequently asked questions about passkeys at Auth0!

Indicio

Biometric digital identity travel and hospitality Prism report

Prism The post Biometric digital identity travel and hospitality Prism report appeared first on Indicio.

Ontology

Ontology Weekly Report: September 3rd — 9th, 2024

Ontology Weekly Report: September 3rd — 9th, 2024 Ontology At Ontology, we’re continuing to engage closely with our community, ensuring consistent communication and collaboration. Here’s what’s been happening: Community Call and Privacy Hour Our regular Community Call and Privacy Hour took place as planned, fostering open conversations on decentralized identity and privacy. If you missed
Ontology Weekly Report: September 3rd — 9th, 2024

Ontology

At Ontology, we’re continuing to engage closely with our community, ensuring consistent communication and collaboration. Here’s what’s been happening:

Community Call and Privacy Hour
Our regular Community Call and Privacy Hour took place as planned, fostering open conversations on decentralized identity and privacy. If you missed it, catch up with the recording here.

ONTO Wallet New Node Registration Tutorial
Stay on top of your game! We’ve released a new video tutorial on how to register a node, making it easier than ever to get started.

Joining the Exocore Ecosystem
ONTO Wallet is now a part of the Exocore ecosystem, reinforcing our commitment to providing top-tier decentralized solutions.

Orange Protocol ENS on Base Campaign
We’re excited to celebrate ENS’s expansion to the Base chain, a major step toward bringing billions of people onchain! You can now mint and manage ENS subnames directly on Base with lower gas fees. In collaboration with the artist MEK, we’ve unveiled artwork capturing this milestone. This campaign boosts the integration of ENS as a digital identity in decentralized applications. Don’t miss out — join the campaign today!

Community

Engagement is at the heart of what we do. This week, we kept the momentum going with interactive sessions and fun activities:

Wordle Game
We hosted our first-ever Wordle game during this week’s discussions, and it was a hit! Due to its success, it will now become a monthly feature. Special thanks to our hosts, SasenDish and Iamfurst, for their energy!

Telegram Community Discussion
The Ontology French Telegram channel hosted a session on the history of crypto, focusing on the Mt. Gox collapse. Special thanks to Mathus95 for his valuable insights.

Publications

Check out our latest articles for deep dives into critical Web3 issues:

Decentralized Identity and Reputation: Balancing Freedom and Regulation
Discover how decentralized identity systems can protect privacy while addressing the need for regulation. Real-world examples like Silk Road and Tornado Cash illustrate the challenges and solutions. Read more.
With transparency and engagement, we could create a system that balances freedom with responsibility.
Mark Cuban’s Challenge to Trump Supporters
This article highlights Mark Cuban’s comments and their relevance to the echo chambers in venture capital. Read here.
As we continue to develop Web3 technologies, let’s push for a world where investor reputations and venture capital histories are public, verifiable, and untouchable by spin.
Stay Connected

Stay engaged and informed by following us on our social media channels. Your participation is essential as we continue to build a more secure and inclusive digital world together.

Ontology Website / ONTO Website / OWallet (GitHub) / Twitter / Reddit / Facebook / LinkedIn / YouTube / Telegram Announcements / Telegram English / Discord

Ontology Weekly Report: September 3rd — 9th, 2024 was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Protecting Cloud Environments at Scale

by Dominik Sowinski In today’s cloud-driven world, securing digital infrastructure is more challenging than ever. With advanced persistent threats (APTs) on the rise and global conflicts intensifying cyber risks, adapting cloud security strategies is essential. At cyberevolution 2024, Dominik Sowinski, Cybersecurity Architect at Siemens AG, will explore how organizations can fortify their cloud e

by Dominik Sowinski

In today’s cloud-driven world, securing digital infrastructure is more challenging than ever. With advanced persistent threats (APTs) on the rise and global conflicts intensifying cyber risks, adapting cloud security strategies is essential. At cyberevolution 2024, Dominik Sowinski, Cybersecurity Architect at Siemens AG, will explore how organizations can fortify their cloud environments against emerging threats.

Dominik’s talk will cover the latest attack trends and offer strategies for protecting cloud infrastructures at scale. He’ll delve into how AI, automation, and secure architecture can help mitigate risks, while highlighting best practices for building a resilient cloud security framework.

For professionals tasked with safeguarding their organization's cloud operations, this session is a must. Don’t miss out on the opportunity to stay ahead of evolving threats in today’s dynamic cybersecurity landscape.


Metadium

Explorer Update

Dear Community, We are excited to announce that the Metadium Explorer website has been updated. A new feature has been added to the Token Transfer menu, limiting data beyond the offset range. This will allow you to access data more reliably, improving the overall user experience. Metadium will continue to prioritize your convenience and security as we make ongoing improvements. Thank you.

Dear Community,

We are excited to announce that the Metadium Explorer website has been updated. A new feature has been added to the Token Transfer menu that restricts queries beyond the displayed offset range. This allows you to access data more reliably, improving the overall user experience.

Metadium will continue to prioritize your convenience and security as we make ongoing improvements.

Thank you.


Metadium Team

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

Explorer Update was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

What is Dynamic Access Control? Ties to Authorization

Benefits of dynamic access control and how it works, with a focus on its role in financial services and key features for improved access management

Introduced as part of Windows Server 2012, Dynamic Access Control (DAC) enables administrators to regulate network access based on a number of dynamic variables. For instance, dynamic access control can grant a user access to network resources while on a private internet connection, but restrict their access if they’re on a public wi-fi network. This makes dynamic access control well-suited to meeting the demands of modern access management. Financial service providers can use dynamic access control to enhance their data governance in a way that doesn’t interfere with the user experience.
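
The underlying idea generalizes beyond Windows: an access decision combines identity attributes with dynamic context instead of relying on group membership alone. A minimal conceptual sketch in Python (not the Windows DAC API) might look like this:

def allow_access(user_groups: set, network: str, sensitivity: str) -> bool:
    # Context-aware rule: group membership alone is not enough for
    # sensitive resources when the connection is untrusted.
    if sensitivity == "high" and network != "private":
        return False
    return "finance" in user_groups

print(allow_access({"finance"}, "private", "high"))  # True
print(allow_access({"finance"}, "public", "high"))   # False: untrusted network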


BlueSky

Share video on Bluesky!

Bluesky now has video!

After much anticipation, you can now share videos on Bluesky! Let’s dive right into the quick facts.

Quick facts

- Each post can contain one video.
- Videos can be up to 60 seconds long.
- Bluesky currently supports .mp4, .mpeg, .webm, and .mov video files.
- By default, videos will auto-play. You can turn off auto-play in Settings.

Update to version 1.91 of the mobile app or refresh desktop to begin watching video on Bluesky. We're rolling out the ability to post video gradually to ensure a smooth experience.

Some more details

- You can attach subtitles to your video.
- Currently, you can upload 25 videos / 10 GB of video per day. We may tweak this limit.
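
As a small illustration, a client could pre-check an upload against the limits listed above before calling the API. The sketch below encodes only the constraints stated in this post and is not Bluesky’s own SDK code:

import os

ALLOWED_EXTENSIONS = {".mp4", ".mpeg", ".webm", ".mov"}
MAX_DURATION_SECONDS = 60
DAILY_VIDEO_LIMIT = 25

def can_upload_video(filename: str, duration_s: float, uploads_today: int) -> bool:
    # Client-side pre-check against the launch limits described above.
    extension = os.path.splitext(filename)[1].lower()
    return (
        extension in ALLOWED_EXTENSIONS
        and duration_s <= MAX_DURATION_SECONDS
        and uploads_today < DAILY_VIDEO_LIMIT
    )

print(can_upload_video("clip.mov", 45, 3))  # True
print(can_upload_video("clip.avi", 45, 3))  # False: unsupported format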

At Bluesky, the product team works hand-in-hand with Trust & Safety to develop new features. Here’s the safety tooling available with video:

- You must verify your email before you can upload a video. This is one step to decrease spam and abuse with video.
- You can apply labels to your own videos, for example, for adult content.
- You can submit reports to Bluesky’s moderation team for posts with video. These posts may be labeled or taken down.
- Video that contains illegal content will be purged from our infrastructure.
- For users that repeatedly violate our community guidelines with video content, Bluesky’s moderation team may remove your ability to upload videos.
- Every video is processed via Hive and Thorn to scan for content that requires a content warning or content that should be taken down (e.g. illegal material like CSAM).
- When you delete a post that contains video, the video will be deleted immediately. Shortly afterwards, the data will be entirely purged from Bluesky infrastructure as well.

Sports, pop culture, politics, breaking news, and so much more just got a lot more exciting on Bluesky! We’re so excited for our community to continue to grow. See you on Bluesky!

Tuesday, 10. September 2024

KuppingerCole

A Glimpse into the 2024 IGA Market Landscape

The IGA market continues to grow, and although at a mature technical stage, it continues to evolve in the areas of intelligence and automation. Today, there still are some organizations either looking at replacements of UAP and ILM or IAG, but most are opting for a comprehensive IGA solution that simplifies deployment and operations and to tackle risks originating from inefficient access governanc

The IGA market continues to grow, and although technically mature, it continues to evolve in the areas of intelligence and automation. Today, there are still some organizations looking at replacements for UAP and ILM or IAG tools, but most are opting for a comprehensive IGA solution that simplifies deployment and operations and tackles risks originating from inefficient access governance features. The level of identity and access intelligence has become a key differentiator between IGA product solutions. Automation is still the key trend in IGA, reducing management workload by automating tasks, providing recommendations, and improving operational efficiency.

Nitish Deshpande, Research Analyst at KuppingerCole, will discuss the current state of the IGA market, the core capabilities required by IGA solutions as well as the business activities supported by IGA solutions. He will describe our Leadership Compass methodology and process and show some high-level results from the report which was published last month.




Unlocking Success: Practice-Oriented Role Management and Entitlement Concept Administration in Focus

IT professionals face the challenge of efficiently managing complex role structures and entitlement concepts. The large number of individual entitlements and role objects complicates not only their creation but also their continuous adaptation to changing requirements in identity and access management (IAM). In addition, compliance requirements must be met and changes documented in a traceable…

IT professionals face the challenge of efficiently managing complex role structures and entitlement concepts. The large number of individual entitlements and role objects complicates not only their creation but also their continuous adaptation to changing requirements in identity and access management (IAM). In addition, compliance requirements must be met and changes documented in a traceable manner. With the help of modern technologies such as centralized platforms, visual analytics, and workflow engines, the challenges of role management and entitlement concept administration can be addressed effectively.

Join the IAM experts from KuppingerCole Analysts and Nexis as they discuss how role-structure complexity, compliance requirements, and the need for traceability of changes pose significant challenges in IAM.

Matthias Reinwarth, Director Practice IAM at KuppingerCole Analysts, will give an overview of the growing need for an overarching, well-administered role concept. He will also outline why this is particularly important for meeting legal and regulatory requirements.

Alexander Puchta, Head of Professional Services at Nexis GmbH, will explain how standardized approaches and integrations enable customers to implement best practices and meet compliance requirements. Practical examples illustrate the applicability of these solutions.




Analyst's View: Passwordless Authentication for Enterprises

by Alejandro Leal Driven by the security risks and inconvenience associated with passwords, organizations are increasingly moving towards eliminating them altogether. Passwordless authentication solutions have emerged as a compelling alternative, offering enhanced security features and improved user convenience compared to traditional methods. Although passwordless options have been around for a w

by Alejandro Leal

Driven by the security risks and inconvenience associated with passwords, organizations are increasingly moving towards eliminating them altogether. Passwordless authentication solutions have emerged as a compelling alternative, offering enhanced security features and improved user convenience compared to traditional methods. Although passwordless options have been around for a while, some recent solutions are gaining traction with enterprises and even consumer-facing businesses.

1Kosmos BlockID

Navigating the Complexities of Modern Customer Identity Verification

In an era where identity theft and fraud are rampant, understanding the complexities of customer identity verification is crucial for businesses, especially in the financial sector. This involves meticulous Know Your Customer processes, safeguarding sensitive customer data, and adhering to global regulations to prevent fraudulent activities. Technological advancements such as AI, blockchain, and b

In an era where identity theft and fraud are rampant, understanding the complexities of customer identity verification is crucial for businesses, especially in the financial sector. This involves meticulous Know Your Customer processes, safeguarding sensitive customer data, and adhering to global regulations to prevent fraudulent activities. Technological advancements such as AI, blockchain, and biometrics have revolutionized these processes, ensuring they are more secure and user-friendly.

Understanding KYC (Know Your Customer)

Know Your Customer, commonly called KYC, is a pivotal component of customer identity verification. KYC is a process in which businesses verify the identity of their clients, confirm that customers’ identity documents are genuine, and assess the potential risks associated with maintaining a business relationship with them. Businesses, particularly in the financial sector, employ KYC procedures to comply with global regulations and prevent fraudulent activities such as money laundering and other forms of identity fraud and theft.
The KYC process includes various stages, such as customer identification and document collection, customer due diligence, and ongoing monitoring of a customer’s accounts and transactions. It involves collecting, verifying, and maintaining detailed customer information, including personal details, contact information, and document verification. As a result, KYC helps create a secure business environment, fostering trust among clients and businesses.
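
To make those stages concrete, a simplified onboarding pipeline might be structured like the sketch below. The helper checks are deliberately stubbed placeholders rather than a real vendor integration.

from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    document_id: str

def documents_look_genuine(applicant: Applicant) -> bool:
    # Placeholder for real document verification (OCR, liveness, registry checks).
    return bool(applicant.document_id)

def due_diligence_risk(applicant: Applicant) -> float:
    # Placeholder risk score in [0, 1] from sanctions/PEP screening and the like.
    return 0.2

def onboard(applicant: Applicant) -> bool:
    # Skeleton of the typical KYC stages described above.
    if not documents_look_genuine(applicant):  # customer identification
        return False
    if due_diligence_risk(applicant) > 0.8:    # customer due diligence
        return False
    print(f"Scheduling ongoing transaction monitoring for {applicant.name}")
    return True

print(onboard(Applicant("Alice Example", "P1234567")))  # -> True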

Data Privacy and Protection

In customer identity verification, data privacy and protection of sensitive information are significant. Safeguarding customer data against unauthorized access and potential breaches is indispensable for maintaining customer trust and regulatory compliance. Businesses must establish robust data protection mechanisms that ensure customer data is stored, processed, and transmitted securely.
Data protection goes beyond the confines of technological safeguards. It encompasses legal and procedural measures, including consent management, data minimization, and adherence to global data protection regulations. In essence, protecting customer data is not merely a technical requirement but a comprehensive approach that integrates technology, legal compliance, and ethical considerations in handling a customer’s identity information.

Verification Process and User Experience

The verification process is a critical juncture where customer experience and security converge. An effective verification process must be streamlined, user-friendly, and secure, balancing stringent security measures with a seamless user experience. Businesses must design intuitive online verification processes, minimizing customer effort and reducing the abandonment rate.
An optimized customer verification process incorporates multiple verification methods, such as document verification, biometric authentication, and two-factor authentication, to ensure compliance and enhance security. Furthermore, it’s imperative that the verification process remains agile, adapting to evolving customer needs and emerging security threats. Thus, fostering a verification process that combines user-centricity and security is instrumental in enhancing customer satisfaction and trust.

How Do You Verify Customer Identity? Utilizing AI and ML in Verification

Artificial Intelligence (AI) and Machine Learning (ML) are transformative technologies reshaping the landscape of customer identity verification. AI and ML algorithms can analyze vast datasets, identify patterns, and facilitate real-time decision-making in the identity verification process. These technologies enable automated document verification, facial and voice recognition, and anomaly detection, enhancing the accuracy and efficiency of identity verification.
By harnessing the power of AI and ML, businesses and financial institutions can automate repetitive tasks, reduce human error, and expedite the verification process. These techniques also allow for the continuous improvement of verification procedures, as the algorithms learn and adapt to new patterns and threats, ensuring the verification process remains robust against evolving fraudulent tactics.
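
As one concrete, deliberately simplified example of ML-based anomaly detection in this setting, an isolation forest can flag verification attempts whose feature profile deviates from normal traffic. The features and numbers below are invented for illustration:

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per verification attempt:
# [document_quality_score, face_match_score, retries_in_last_hour]
rng = np.random.default_rng(seed=42)
normal_attempts = rng.normal(loc=[0.9, 0.95, 1.0], scale=0.05, size=(500, 3))

detector = IsolationForest(contamination=0.02, random_state=0)
detector.fit(normal_attempts)

suspicious = np.array([[0.4, 0.3, 12.0]])  # poor match scores, many retries
print(detector.predict(suspicious))        # [-1] -> flagged as anomalous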

Blockchain for Secure Data Storage

Blockchain technology is emerging as a formidable force in securing customer data and enhancing the integrity of identity verification processes. Blockchain allows for the creation of decentralized and immutable ledgers where customer data can be stored securely, mitigating the risks associated with centralized data storage, such as data breaches and unauthorized access.
In a blockchain-based identity verification system, a customer’s identity data is encrypted and stored in a decentralized manner, ensuring it is resilient against tampering and unauthorized access. This technology fosters enhanced data integrity and trust: customers can exercise greater control over their data, and businesses can ensure that the data used to verify and authenticate customers is accurate and unaltered.
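
A minimal sketch of the integrity idea, assuming only that a hash of the record (rather than the raw data) is what gets anchored to a ledger:

import hashlib
import json

def fingerprint(record: dict) -> str:
    # Deterministic SHA-256 fingerprint of a record; anchoring this hash
    # on-chain makes later tampering with the off-chain record detectable.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

record = {"name": "Alice Example", "document": "passport", "verified": True}
anchored = fingerprint(record)
assert fingerprint(record) == anchored                          # unchanged data matches
assert fingerprint({**record, "verified": False}) != anchored   # tampering is detected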

Biometrics and Advanced Verification Methods

Biometrics have cemented their place as a cornerstone of advanced identity verification methods. Biometric verification encompasses various modalities of ID verification, such as fingerprint recognition, facial recognition, and voice authentication. These methods leverage individuals’ unique biological and physical characteristics, providing a high degree of security and accuracy in identity verification services.
Employing biometrics in the verification process enhances the user experience by enabling quick and effortless verification while helping to detect false online identities. Moreover, it bolsters security by ensuring the verified identity corresponds to a live individual, mitigating the risks associated with identity theft and spoofing with stolen identities. As biometric technology continues to evolve, it is poised to play an increasingly pivotal role in shaping secure and user-friendly identity verification processes.

Legal and Compliance Aspects
Global Regulatory Framework

Navigating the global regulatory landscape is indispensable in customer identity verification. International regulations and guidelines govern the processes and protocols for verifying customer identities. These regulatory frameworks aim to safeguard customer data, prevent fraudulent activities, and promote a secure digital ecosystem. Adhering to these regulations is paramount for businesses to maintain operational legitimacy and foster customer trust.
These global regulations often mandate stringent KYC (Know Your Customer) verification procedures, Anti-Money Laundering (AML) policies, and robust data protection measures. They demand continuous compliance, requiring businesses to stay abreast of regulatory updates and dynamically align their verification processes with evolving compliance standards.

GDPR, CCPA, and Other Data Protection Laws

Prominent data protection regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are pivotal in shaping customer identity verification processes. These regulations advocate for stringent data protection measures, consent management, and enhanced user control over personal data. Compliance with these laws is imperative to safeguard user data and uphold organizational credibility and brand reputation.
These regulations contain specific provisions on collecting, storing, and processing personal data during customer verification. They advocate data minimization, purpose limitation, and enhanced security measures to prevent unauthorized access and breaches of personal information. Understanding and incorporating these legal provisions is therefore crucial for businesses to run lawful and secure identity verification processes.

Challenges and Solutions in Customer Identity Verification
Balancing Security and User-Friendliness

Creating a verification process that is both secure and user-friendly is a challenge. A robust verification process must ensure that security is maintained, but it should also avoid creating cumbersome processes that may deter users. Simplifying and streamlining the verification process while maintaining high-security standards is crucial for enhancing user satisfaction and trust.
Employing intuitive user interfaces, minimizing the number of required user actions, and utilizing technologies like biometrics and knowledge-based authentication can aid in achieving this balance. Adaptive authentication, which adjusts the level of required verification based on the associated risk, is another approach that can optimize the user experience without compromising security.
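A minimal sketch of the adaptive-authentication idea follows; the risk signals, weights, and thresholds are made-up assumptions for illustration.

```python
def risk_score(new_device: bool, unusual_location: bool, high_value: bool) -> int:
    """Toy risk model: each signal adds a fixed weight."""
    return 2 * new_device + 2 * unusual_location + 3 * high_value

def required_factor(score: int) -> str:
    """Step up the verification requirement as risk grows."""
    if score >= 5:
        return "biometric + one-time passcode"
    if score >= 2:
        return "one-time passcode"
    return "password only"

print(required_factor(risk_score(new_device=True, unusual_location=False, high_value=True)))
# -> biometric + one-time passcode
```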

Dealing with Fraud and Identity Theft

Fraud and identity theft remain pervasive threats in today’s digital domain. Crafting verification processes that can robustly counteract these threats is crucial. Techniques such as multi-factor authentication, machine learning to detect unnatural patterns, and continuously updated security protocols can enhance resilience against these challenges.
Cultivating user awareness about potential threats and safe practices is vital. Education and clear communication can empower users to act as a robust first line of defense, recognizing and averting potential security threats before they manifest into breaches.

Future-Proofing Verification Processes

Ensuring that verification processes remain relevant and effective in evolving technological landscapes and emerging threats is essential. Future-proofing involves cultivating a flexible and adaptive verification strategy that swiftly incorporates new technologies, addresses emerging threats, and meets changing regulatory requirements.
Continuous learning, proactive adaptation of new technologies, and fostering a security-centric organizational culture are critical facets of future-proofing verification processes. It involves technological adaptability and strategic foresight to anticipate future trends and challenges, ensuring sustained relevance and effectiveness.

Automate Your Customer Verification Process with 1Kosmos

1Kosmos integrates with the pivotal aspects of customer identity verification, modernizing and securing the customer onboarding process. It revolutionizes KYC (Know Your Customer) by offering self-service identity verification, ensuring customers are authenticated with over 99% accuracy.
1Kosmos ensures a robust and unbiased verification process by utilizing live facial biometrics matched with government-issued credentials. Moreover, it empowers customers with a digital wallet, allowing them to securely transact and share Personally Identifiable Information (PII), enhancing user experience and trust.
Our platform’s emphasis on privacy by design aligns with the global emphasis on data protection. It puts users in complete control of their PII, ensuring enhanced security and compliance with regulations such as GDPR and CCPA.
1Kosmos’ innovative approach, combining biometrics and blockchain technology, enhances the security and efficiency of the customer identity verification process and fosters a user-centric approach, balancing stringent security measures with a seamless user experience.
Beyond refining customer identity verification, 1Kosmos also incorporates added security features like:
1. Biometric-based Authentication: We push biometrics and authentication into a new “who you are” paradigm. 1Kosmos uses biometrics to identify individuals, not devices, through credential triangulation and identity verification.
2. Identity Proofing: 1Kosmos provides tamper-evident and trustworthy digital verification of identity – anywhere, anytime, and on any device – with over 99% accuracy.
3. Privacy by Design: Embedding privacy into the design of our ecosystem is a core principle of 1Kosmos. We protect personally identifiable information in a distributed identity architecture, and the encrypted data is only accessible by the user.
4. Distributed Ledger: 1Kosmos protects personally identifiable information in a private, permissioned blockchain and encrypts digital identities so that they are accessible only by the user. The distributed architecture ensures there are no central databases to breach or honeypots for hackers to target.
5. Interoperability: 1Kosmos can readily integrate with existing infrastructure through its 50+ out-of-the-box integrations or via API/SDK.
6. Industry Certifications: Certified to, and exceeding the requirements of, NIST 800-63-3, FIDO2, UK DIATF, and iBeta PAD-2 specifications.

To learn more about the 1Kosmos solution, visit the platform capabilities and feature comparison pages of our website.

The post Navigating the Complexities of Modern Customer Identity Verification appeared first on 1Kosmos.


KuppingerCole

KuppingerCole Cybersecurity Council Reflects on the CrowdStrike Incident: Lessons and Future Directions


by Berthold Kerl

On September 4, 2024, KuppingerCole’s Cybersecurity Council convened for its third meeting of the year. This council, composed of Chief Information Security Officers (CISOs) from some of Europe’s largest organizations, provides a platform for discussing pressing cybersecurity challenges. This session focused on the July 2024 CrowdStrike incident, which caused widespread disruption to Windows systems globally, and provided members the opportunity to share their lessons learned and proposed future actions.

The incident, caused by a faulty kernel-level driver, resulted in the crash of around 8 million machines worldwide, particularly affecting systems using BitLocker encryption. John Tolbert, KuppingerCole’s lead analyst, opened the discussion with an analysis of the event, pointing out that insufficient pre-deployment testing and the absence of a phased rollout were key factors in the incident’s scale. Tolbert also presented findings from his recent research into Endpoint Protection, Detection, and Response (EPDR) tools, highlighting the growing complexity and risk that accompanies widespread reliance on these solutions.

The attending CISOs, representing a variety of industries from banking to energy and retail, provided invaluable feedback on how their organizations dealt with the fallout from the CrowdStrike incident. Their experiences offered a wide range of perspectives: from those who directly used CrowdStrike to those impacted by the vulnerabilities of suppliers who relied on it. A key theme that emerged was the importance of improving testing procedures, ensuring stronger controls over software updates, and reinforcing supply chain security practices.

Across the board, CISOs emphasized the importance of Business Continuity Management (BCM). One organization reported that despite having thousands of systems down, their BCM efforts ensured a rapid recovery, with 95% of systems restored within 48 hours. Others, however, encountered significant operational downtime, particularly in sectors reliant on point-of-sale systems. For these organizations, recovery was hampered by complex dependencies on both internal and third-party systems.

Another key insight revolved around insurance and liability issues. CISOs debated the challenges of pursuing insurance claims in incidents where the root cause stems from software vendors rather than cyberattacks. Many organizations are now considering adding technical insurance to their cyber policies, as existing coverages did not account for software-induced outages.

One of the more nuanced discussions concerned the merits of multi-vendor EPDR strategies. While employing multiple security tools may reduce dependence on a single vendor, the increased complexity of managing and integrating different solutions often brings its own risks. Several members expressed concern over this approach, with one noting that a multi-EPDR strategy could cause operational inefficiencies that outweigh the potential benefits.

The session concluded with a focus on key takeaways:

- Better Testing and Controlled Rollouts: Vendors must implement more stringent testing protocols and provide customers with better control over update timings to avoid global disruptions.
- Supply Chain Security: Organizations need to reassess their vendor management strategies, ensuring that service-level agreements (SLAs) clearly define responsibilities during incidents.
- Incident Communication: Timely and transparent communication with internal teams and external partners is critical in managing the fallout from large-scale incidents like CrowdStrike’s.

The KuppingerCole Cybersecurity Council continues to serve as an essential forum for CISOs to exchange insights and best practices. The next in-person meeting will take place during the cyberevolution 2024 conference, scheduled for December 3-5 in Frankfurt, where members will further explore cutting-edge cybersecurity strategies and enjoy networking opportunities.

This lively session offered valuable insights for council members and showcased the ongoing relevance of collaborative efforts in the cybersecurity space. Through these discussions, the council can drive industry-wide improvements in how security incidents are managed, both for member organizations and the broader public.

Next Meeting: December 3-5, 2024, cyberevolution, Frankfurt.


Indicio

From federated to decentralized identity: Why Verifiable Credentials are the next step in identity management


By: Helen Garneau

In today’s digital world, identity is at the core of how individuals interact with online services. From accessing email to making online purchases, proving who you are is fundamental.

There are two methods for managing online identities, federated identity and decentralized identity (one legacy, one new), and each takes a different approach to where personal data is stored in order to authenticate an identity. Federated identity, which has dominated identity management for years, relies on centralized data management: personal data is stored in a database and checked against a login and password from a user account. Decentralized identity, by contrast, allows people, organizations, and things to hold their own personal data, with the source and integrity of that data cryptographically authenticated for identity verification.

We’ll explain this in more detail in a moment, but this distinction — centralized vs decentralized — has profound implications for data privacy and security, and user experience.

Federated Identity: A Step Beyond Centralized Identity

Federated identity systems improve upon traditional centralized digital identity by allowing a single sign-on (SSO) across multiple platforms. Instead of creating separate accounts for each service, users can log in once using a trusted identity provider (IdP) like Google, Facebook, or Microsoft, and access various services. This system offers convenience for both users and service providers, reducing the friction of managing multiple identities.

Federated identity providers get their information directly from users during account creation or from external sources like social media, public records, and other databases. In many cases, businesses rely on these providers to authenticate users, paying for verification services or receiving data in exchange for marketing insights. While this model offers convenience, it has significant drawbacks.

The Drawbacks of Federated Identity

- Centralized Control: Even though federated identity reduces the need for multiple login credentials, it still relies on centralized identity providers. These providers act as gatekeepers to online services, standing in the way of an end user and the service they are accessing. This creates a system where a few large enterprises control a vast number of digital interactions.
- Lack of Privacy: Federated identity providers typically gather extensive amounts of user data, which is then monetized. Users may not be aware of how much data is being shared across services or sold to third parties, leading to privacy concerns. As more services link to federated identities, the amount of shared data can grow exponentially.
- Single Points of Failure: The reliance on one or two major identity providers can also introduce risk. If a federated identity provider goes offline, or if an account is locked or hacked, users lose access to all associated services. This concentration of control makes federated systems prone to major disruptions when something goes wrong.
- Data Breaches: Federated systems, though more distributed than centralized identity models, still centralize sensitive data within the hands of a few large corporations. As history has shown, these providers are frequent targets for hackers, making them vulnerable to large-scale breaches that compromise millions of users at once.

Decentralized Identity: A User-Centric Solution with Verifiable Credentials

Decentralized identity flips the traditional centralized model on its head. Instead of relying on centralized authorities to manage identity data collected from third parties, decentralized identity systems give individuals control over their own data.

How does this work? It’s a two-step process. First, a global standard from the World Wide Web Consortium (W3C) allows people and organizations to create decentralized identifiers (DIDs), which they can cryptographically prove they control. Then, using these DIDs, they can add digital credentials that contain relevant identity information (like a government ID, bank account, or passport), making it easy to present their information digitally for verification by other entities, independently, without intervention from federated systems.

Verifiable Credentials are a special type of digital credential that offer a powerful and efficient way to issue, share, and verify important data. What sets them apart is that the data is digitally signed by the trusted issuer, ensuring its origin and authenticity can be instantly verified using simple software—without needing logins, passwords, or checking against a database. Since you hold your own data, you can choose when to share it, solving a key issue in data privacy regulation: lack of consent. Plus, some Verifiable Credentials let you selectively share only the necessary information or use privacy-preserving features. And if anyone tries to alter the credential after it’s issued, the change is easy to spot during verification.

The combination of DIDs and Verifiable Credentials means that you can always be certain of the source of a credential and that the data in the credential hasn’t been altered.
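Here is a stripped-down sketch of that signing-and-verifying flow using an Ed25519 key from the Python cryptography package. Real Verifiable Credentials use W3C proof formats (Data Integrity proofs or JWTs); the bare JSON payload below is an illustrative assumption.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()   # held by the trusted issuer
credential = {"type": "ProofOfAge", "over18": True, "holder": "did:example:123"}
payload = json.dumps(credential, sort_keys=True).encode()
proof = issuer_key.sign(payload)            # issued alongside the credential

# Verifier side: only the issuer's public key is needed, no database lookup.
public_key = issuer_key.public_key()
try:
    public_key.verify(proof, payload)
    print("credential is authentic and unaltered")
except InvalidSignature:
    print("credential was tampered with")
```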

The Advantages of Decentralized Identity with Verifiable Credentials

- User Control and Privacy: In a decentralized identity system, individuals have full control over their credentials. They decide which pieces of information to share and with whom. This is in contrast to federated identity, where large identity providers mediate these transactions. Decentralized identity systems enable self-sovereign identity (SSI), meaning users have complete autonomy over their personal data.
- Improved Privacy through Selective Disclosure: Verifiable Credentials allow for selective disclosure, where users can prove certain facts (like being over 18) without revealing unnecessary information (like a full birthdate). This significantly enhances privacy and minimizes the sharing of personal data compared to federated identity systems, where often more information than necessary is shared across services.
- No Single Point of Failure: Unlike federated identity, decentralized identity doesn’t rely on any single provider. This dramatically reduces the risk of losing access to services in the event of an account compromise or a provider outage. The use of distributed ledger technology means there is no central database that can be breached, making decentralized identity systems inherently more secure.
- Persistent Identity: When a credential issuer writes the metadata for a credential to a distributed ledger, the actual identity it supports cannot be taken away. The immutability of data written to a distributed ledger means that a Verifiable Credential can always be verified. Important to note — only the metadata for the credential, the data needed to perform cryptography, is written to the ledger. No personal data goes on the ledger.
- Added Security: When you don’t have to store personal data in a database to manage identity authentication and access, it can’t be stolen. It’s as simple as that. Another huge benefit — you can access accounts or systems without having to use passwords. And if you want the ultimate in security, you can issue biometrics as Verifiable Credentials. This means that when a person performs a biometric scan, they simultaneously present a biometric template in a Verifiable Credential, and the scan is compared with the template. This effectively binds biometric data to a person and can be used to prevent generative AI deepfakery.
- Efficiency and Convenience: While federated identity simplifies login processes by allowing users to access multiple services with one account, decentralized identity goes even further. Once verifiable credentials are issued, they can be reused across different services without having to rely on a third-party identity provider for each transaction. This speeds up verification processes and reduces reliance on external parties.

Why Decentralized Identity and VCs Are the Future

Decentralized identity, powered by verifiable credentials, represents a paradigm shift in how we manage identity online. By addressing the security, privacy, and efficiency challenges inherent in centralized and federated systems, decentralized identity offers a more robust solution that traditional identity systems cannot match. By eliminating the need for centralized identity providers and reducing the risk of data breaches, decentralized identity systems offer a more secure and private way to manage digital identities. Moreover, they deliver a more seamless and user-friendly experience by enabling users to reuse credentials across services without intermediaries.

In an increasingly interconnected world, decentralized identity and VCs pave the way for a more secure, private, and user-centric digital future.

Visit Indicio for more information on decentralized identity and verifiable credentials. Or contact us to find out how your organization can boost your digital identity programme.

###

Suggested reading:

Beginners guide

What are Verifiable Credentials? (With Pictures!)

What is DIDComm? (With Pictures!)

How verifiable credentials disrupt online fraud, phishing, and identity theft


Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post From federated to decentralized identity: Why Verifiable Credentials are the next step in identity management appeared first on Indicio.


Ocean Protocol

Predictoor Benchmarking: 180-Day Profitability of Linear Classifiers

Benchmarking seven different linear classifier models to determine the best one for Predictoor & trader profits

Summary

This benchmarking blog post tries to answer the question, “Which linear classifier model makes the most $?” So we benchmarked all seven linear classifier Predictoor models over 180 days (50k 5min candle iterations) to show absolute value profitability. This time frame is 10x longer than that of previous blog post benchmarks, which covered only 18 days (5k iterations). A 180-day time frame therefore gives a clearer picture of absolute value profitability, which helps determine the best models for Predictoor & trader bots.

Predictoor Profit vs Time for the most successful model, ClassifLinearRidge with None calibration

Over the 180-day term, Predictoor profit fluctuates: some 18-day periods make $, some lose $, and others remain relatively flat. This is demonstrated in the plot above of Predictoor Profit vs Time for the ClassifLinearRidge model with None calibration, the most successful model for Predictoor profit.

That’s why it’s important to benchmark over longer time frames (180 days+) rather than just short time frames if we want to understand absolute value profitability. Nonetheless, short 18-day benchmarks are useful to compare relative performance of one model vs another (but one cannot draw definitive conclusions about profitability).

Trader Profit vs Time for ClassifLinearRidge with None calibration

A plot of Trader Profit vs Time for the same model also shows how 18-day periods within a 180-day time frame will either make $, lose $, or remain flat.

This blog post benchmarks Ocean Predictoor simulations for all the Predictoor linear classifier models: ClassifLinearLasso, ClassifLinearLasso_Balanced, ClassifLinearRidge, ClassifLinearRidge_Balanced, ClassifLinearElasticNet, ClassifLinearElasticNet_Balanced, and ClassifLinearSVM. Each implementation is compared with three different calibrations.

This blog post then walks through each of the benchmark plots for Predictoor/trader profit and compares the models & their calibrations.

1. Introduction

1.1 What is Ocean Predictoor?

For information about Ocean Predictoor, please refer to the Predictoor Series blog post that catalogs all the blog posts, articles, and talks related to Predictoor. Learn about ML concepts such as classification, L1 & L2 regularization, calibration, and Predictoor’s simulation tools (“pdr sim” and “pdr multisim”) in the Regularized Linear Classifiers With Calibration blog post. Learn about ML balancing in the blog post, The Effects of Balancing on Calibrated Linear Classifiers.

1.2 Benchmarks Outline

We run benchmarks on the approaches:

- ClassifLinearLasso & ClassifLinearLasso_Balanced — L1 Regularization.
- ClassifLinearRidge & ClassifLinearRidge_Balanced — L2 Regularization.
- ClassifLinearElasticNet & ClassifLinearElasticNet_Balanced — L1 & L2 Regularization.
- ClassifLinearSVM — L2 Regularization.

The models are benchmarked with the same three calibration approaches, None, Isotonic, and Sigmoid, as in the Linear SVM Classifier with Calibration blog post.
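For readers who want to reproduce the setup, the three calibration settings map naturally onto scikit-learn, as in the hedged sketch below; hyperparameters are illustrative, and Predictoor's actual configuration lives in its repository.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression

base = LogisticRegression(max_iter=1000)

models = {
    "None": base,  # raw, uncalibrated probabilities
    "Isotonic": CalibratedClassifierCV(base, method="isotonic", cv=3),
    "Sigmoid": CalibratedClassifierCV(base, method="sigmoid", cv=3),
}
```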

1.3 Experimental Setup

The models were trained on BTC-USDT & ETH-USDT data from Jan 1, 2024 to July 15, 2024. All other experimental parameters, defined in the my_ppss.yaml file, are the same as in the previous blog post, The Effects of Balancing on Calibrated Linear Classifiers.

2. 180-Day Profits of ClassifLinearLasso Balanced & Unbalanced

The ClassifLinearLasso & ClassifLinearLasso_Balanced models are implemented with scikit-learn’s LogisticRegression() function with parameters for a linear kernel trick & L1 penalty.
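A minimal sketch of constructing such L1-penalized classifiers in scikit-learn; the exact hyperparameters are assumptions, not Predictoor's source code.

```python
from sklearn.linear_model import LogisticRegression

# The L1 ("lasso") penalty requires a solver that supports it, e.g. liblinear.
clf_lasso = LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000)

# The _Balanced variant reweights classes inversely to their frequency.
clf_lasso_balanced = LogisticRegression(
    penalty="l1", solver="liblinear", max_iter=1000, class_weight="balanced"
)
```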

2.1 ClassifLinearLasso (Unbalanced)

2.1.1 Predictoor Profit

Max Predictoor Profit: 26,545.88 OCEAN

- Calibration: None
- Max_n_train: 2000
- Autoregressive_n: 2

2.1.2 Trader Profit

Max Trader Profit: $432.36 USD

- Calibration: Isotonic
- Max_n_train: 1000
- Autoregressive_n: 2

2.1.3 Analysis

The ClassifLinearLasso model made moderate Predictoor and trader profits in the 180 days. However, the model maximized Predictoor profit better than trader profit — it generated the third best Predictoor profit of all the benchmarks.

2.2 ClassifLinearLasso_Balanced

2.2.1 Predictoor Profit

Max Predictoor Profit: 6,915.77 OCEAN

- Calibration: Sigmoid
- Max_n_train: 1000
- Autoregressive_n: 2

2.2.2 Trader Profit

Max Trader Profit: $639.00 USD

- Calibration: Isotonic
- Max_n_train: 1000
- Autoregressive_n: 1

2.2.3 Analysis

While not excelling in Predictoor profits, ClassifLinearLasso_Balanced shows that balancing can improve trader profit returns. This model could be useful where stability and moderate trader returns are desired.

3. 180-Day Profits of ClassifLinearRidge Balanced & Unbalanced

The ClassifLinearRidge & ClassifLinearRidge_Balanced models are implemented with scikit-learn’s LogisticRegression() function with parameters for a linear kernel trick & L2 penalty.
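The L2 analogue is a one-liner in scikit-learn (a sketch; "l2" is in fact LogisticRegression's default penalty).

```python
from sklearn.linear_model import LogisticRegression

clf_ridge = LogisticRegression(penalty="l2", max_iter=1000)
clf_ridge_balanced = LogisticRegression(
    penalty="l2", max_iter=1000, class_weight="balanced"
)
```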

3.1 ClassifLinearRidge (Unbalanced)

3.1.1 Predictoor Profit

Max Predictoor Profit: 41,790.77 OCEAN

- Calibration: None
- Max_n_train: 2000
- Autoregressive_n: 2

3.1.2 Trader Profit

Max Trader Profit: $619.43 USD

- Calibration: Isotonic
- Max_n_train: 1000
- Autoregressive_n: 1

3.1.3 Analysis

The ClassifLinearRidge model produced the highest Predictoor profit of all the linear classifier model benchmarks. Therefore, it is a good candidate for running with a Predictoor bot over 180 days. It also generated moderate trader profits with Isotonic calibration.

3.2 ClassifLinearRidge_Balanced

3.2.1 Predictoor Profit

Max Predictoor Profit: 12,811.63 OCEAN

- Calibration: Sigmoid
- Max_n_train: 5000
- Autoregressive_n: 2

3.2.2 Trader Profit

Max Trader Profit: $897.49 USD

- Calibration: None
- Max_n_train: 2000
- Autoregressive_n: 2

3.2.3 Analysis

The ClassifLinearRidge_Balanced model demonstrates strong trader profitability as balancing appears to have boosted trader profits compared to the ClassifLinearRidge model. Interestingly, the model did not need calibration to achieve its large trader profit, whereas previous benchmarks show Isotonic calibration best maximized trader profits.

4. 180-Day Profits of ClassifLinearElasticNet Balanced & Unbalanced

The ClassifLinearElasticNet & ClassifLinearElasticNet_Balanced models are implemented with scikit-learn’s LogisticRegression() function with parameters for a linear kernel trick, L1 & L2 penalties.
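Combining the L1 & L2 penalties in scikit-learn requires the saga solver and an l1_ratio mixing parameter; in this sketch the 0.5 ratio is an assumption.

```python
from sklearn.linear_model import LogisticRegression

clf_enet = LogisticRegression(
    penalty="elasticnet", solver="saga", l1_ratio=0.5, max_iter=2000
)
clf_enet_balanced = LogisticRegression(
    penalty="elasticnet", solver="saga", l1_ratio=0.5,
    max_iter=2000, class_weight="balanced",
)
```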

4.1 ClassifLinearElasticNet (Unbalanced)

4.1.1 Predictoor Profit

Max Predictoor Profit: 39,109.21 OCEAN

- Calibration: None
- Max_n_train: 2000
- Autoregressive_n: 2

4.1.2 Trader Profit

Max Trader Profit: $551.87 USD

- Calibration: Isotonic
- Max_n_train: 1000
- Autoregressive_n: 1

4.1.3 Analysis

The ClassifLinearElasticNet model generated the second highest Predictoor profit of the benchmarked models, second only to the ClassifLinearRidge model. Thus, L2 regularization appears to have made both models more accurate than the rest.

4.2 ClassifLinearElasticNet_Balanced

4.2.1 Predictoor Profit

Max Predictoor Profit: 12,709.23 OCEAN

- Calibration: Sigmoid
- Max_n_train: 2000
- Autoregressive_n: 2

4.2.2 Trader Profit

Max Trader Profit: $1,172.21 USD

- Calibration: None
- Max_n_train: 2000
- Autoregressive_n: 1

4.2.3 Analysis

The ClassifLinearElasticNet_Balanced model achieved the highest trader profit among all benchmarks. As in the other benchmarks, balancing appears to have boosted trader profits. None calibration produced the best trader profit, suggesting that adding classifier calibration to balancing may cause overfitting.

5. 180-Day Profits of ClassifLinearSVM

The ClassifLinearSVM model is implemented with scikit-learn’s LinearSVC() function with parameters for a linear kernel trick and regularization C value of 0.025 (the strength of the regularization is inversely proportional to C).
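A sketch of the corresponding scikit-learn construction; note that LinearSVC exposes no predict_proba, which is one reason probability calibration has to wrap it.

```python
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

svm = LinearSVC(C=0.025)  # smaller C => stronger regularization

# Calibration wrappers for the Sigmoid and Isotonic settings:
svm_sigmoid = CalibratedClassifierCV(svm, method="sigmoid", cv=3)
svm_isotonic = CalibratedClassifierCV(svm, method="isotonic", cv=3)
```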

5.1 Predictoor Profit

Max Predictoor Profit: -162,610.90 OCEAN

- Calibration: Sigmoid
- Max_n_train: 1000
- Autoregressive_n: 2

5.2 Trader Profit

Max Trader Profit: $520.30 USD

- Calibration: Isotonic
- Max_n_train: 1000
- Autoregressive_n: 1

5.3 Analysis

The ClassifLinearSVM model generated significant losses in Predictoor profit, so it is not recommended for use with a Predictoor bot for 180 days. However, it is possible that tuning the model’s regularization parameter could improve profitability. The model managed to generate moderate trader returns with Isotonic calibration.

6. Analysis and Summary

Which linear classifier model makes the most $?

6.1 Predictoor Profit Analysis

The best Predictoor profit was gained by the ClassifLinearRidge model. It gained 41,790.77 OCEAN over the 180-day term with None calibration, max_n_train = 2000, and autoregressive_n = 2. The next best model for Predictoor profitability was ClassifLinearElasticNet. Benchmarks for the ClassifLinearSVM model were very poor, losing more than 162k OCEAN during the 180 days. Thus, it should not be used for a Predictoor bot over 180-day terms.

6.2 Trader Profit Analysis

The best trader profit was gained by the ClassifLinearElasticNet_Balanced model. It profited $1,172.21 USD with None calibration, max_n_train = 2000, and autoregressive_n = 1. The next best model for trader profitability was ClassifLinearRidge_Balanced.

6.3 Benchmark Trends

Benchmarks show that balancing improved trader profits, especially when paired with L2 regularization, but balancing also reduced Predictoor profits. The L2-regularized logistic regression models took the top spots in both Predictoor & trader profit.

6.4 Benchmark Summary

Here’s the breakdown of the best absolute value profitabilities for all seven linear classifier Predictoor models.

ClassifLinearLasso
Max Predictoor profit: 26545.88 OCEAN, calibration = None, max_n_train = 2000, autoregressive_n = 2
Max trader profit: $432.36 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 2

ClassifLinearLasso_Balanced
Max Predictoor profit: 6915.77 OCEAN, calibration = Sigmoid, max_n_train = 1000, autoregressive_n = 2
Max trader profit: $639.00 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

ClassifLinearRidge
Max Predictoor profit: 41790.77 OCEAN, calibration = None, max_n_train = 2000, autoregressive_n = 2
Max trader profit: $619.43 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

ClassifLinearRidge_Balanced
Max Predictoor profit: 12811.63 OCEAN, calibration = Sigmoid, max_n_train = 5000, autoregressive_n = 2
Max trader profit: $897.49 USD, calibration = None, max_n_train = 2000, autoregressive_n = 2

ClassifLinearElasticNet
Max Predictoor profit: 39109.21 OCEAN, calibration = None, max_n_train = 2000, autoregressive_n = 2
Max trader profit: $551.87 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

ClassifLinearElasticNet_Balanced
Max Predictoor profit: 12709.23 OCEAN, calibration = Sigmoid, max_n_train = 2000, autoregressive_n = 2
Max trader profit: $1172.21 USD, calibration = None, max_n_train = 2000, autoregressive_n = 1

ClassifLinearSVM
Max Predictoor profit: -162610.90 OCEAN, calibration = Sigmoid, max_n_train = 1000, autoregressive_n = 2
Max trader profit: $520.30 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

7. Conclusion

We benchmarked the absolute value profitability of seven Predictoor linear classifier models over 180 days. The best model for maximizing Predictoor profit was ClassifLinearRidge. It gained 41,790.77 OCEAN over the 180-day term with None calibration, max_n_train = 2000, and autoregressive_n = 2 tunings. The next best model for Predictoor profitability was ClassifLinearElasticNet. The benchmarks also found that the ClassifLinearSVM model was highly negative in Predictoor profitability, losing more than 162k OCEAN over the time frame.

The best model for maximizing trader profit was ClassifLinearElasticNet_Balanced. It profited $1,172.21 USD with None calibration, max_n_train = 2000, and autoregressive_n = 1 tunings. The next best model for trader profitability was ClassifLinearRidge_Balanced.

Throughout the benchmarks, balancing appeared to improve trader profits, especially when paired with L2 regularization, but simultaneously reduced Predictoor profits. Given that the top 2 Predictoor profit models were ClassifLinearRidge & ClassifLinearElasticNet and the top 2 trader profit models were ClassifLinearElasticNet_Balanced & ClassifLinearRidge_Balanced, it appears that L2 regularization of the linear logistic regression models helped to generate the best profits.

8. Appendix: Tables

8.1 ClassifLinearLasso

Max Predictoor profit: 26545.88 OCEAN, calibration = None, max_n_train = 2000, autoregressive_n = 2

Max trader profit: $432.36 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 2

8.2 ClassifLinearLasso_Balanced

Max Predictoor profit: 6915.77 OCEAN, calibration = Sigmoid, max_n_train = 1000, autoregressive_n = 2

Max trader profit: $639.00 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

8.3 ClassifLinearRidge

Max Predictoor profit: 41790.77 OCEAN, calibration = None, max_n_train = 2000, autoregressive_n = 2

Max trader profit: $619.43 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

8.4 ClassifLinearRidge_Balanced

Max Predictoor profit: 12811.63 OCEAN, calibration = Sigmoid, max_n_train = 5000, autoregressive_n = 2

Max trader profit: $897.49 USD, calibration = None, max_n_train = 2000, autoregressive_n = 2

8.5 ClassifLinearElasticNet

Max Predictoor profit: 39109.21 OCEAN, calibration = None, max_n_train = 2000, autoregressive_n = 2

Max trader profit: $551.87 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

8.6 ClassifLinearElasticNet_Balanced

Max Predictoor profit: 12709.23 OCEAN, calibration = Sigmoid, max_n_train = 2000, autoregressive_n = 2

Max trader profit: $1172.21 USD, calibration = None, max_n_train = 2000, autoregressive_n = 1

8.7 ClassifLinearSVM

Max Predictoor profit: -162610.90 OCEAN, calibration = Sigmoid, max_n_train = 1000, autoregressive_n = 2

Max trader profit: $520.30 USD, calibration = Isotonic, max_n_train = 1000, autoregressive_n = 1

Predictoor Benchmarking: 180-Day Profitability of Linear Classifiers was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Lockstep

It’s safe to assume AIs can at least read. Isn’t it?


What do you think Large Language Models do?

It’s easy to think LLMs think. Anthropomorphism is literally a force of nature. Human beings have evolved with a “Theory of Mind” to help us act more effectively with other conscious beings (I think there might be a better term somewhere for “Theory of Mind”; after all, it’s more a cognitive faculty than a “theory”).

It’s a powerful instinct. And, like other instincts that evolved for a simpler life on the savannah, Theory of Mind can tend to overdo things. It can lead us to intuit, falsely, that all sorts of things are alive (anyone remember the Pet Rock craze?). It seems Theory of Mind leads to “psychological illusions” just as our pre-wired visual cortex leads to optical illusions when we hit it with unnatural inputs. And so some people go so far as to feel that LLMs are sentient.

But most of us are probably wise to the impression that AIs give of being life-like.

So, what do LLMs really do?

Surely it’s safe to presume that a Large Language Model can at least read? I mean, their very name suggests that LLMs have some kind of grasp of language. Any fool can see they ingest text, interpret it and describe what it means. So that means they’re reading, right?

Well, no, AIs don’t even do that.

Check out this short explainer by the wonderful @albertatech on Instagram, of a howler made by all LLMs when asked “How many Rs are in the word strawberry?”.
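The underlying reason is that LLMs operate on tokens, not letters. A small sketch using the tiktoken package makes this visible (the encoding name and the exact token split are assumptions; splits vary by model).

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a common OpenAI encoding
tokens = enc.encode("strawberry")
print([enc.decode_single_token_bytes(t) for t in tokens])
# e.g. [b'str', b'aw', b'berry'] -- the model never "sees" individual letters

# Plain code, by contrast, counts characters trivially:
print("strawberry".count("r"))  # 3
```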

People’s mental models of AI are hugely important. The truth is that AIs lack anything even close to self-awareness. They cannot reflect on the things they generate and why. They have no inner voice that applies common sense to filter right and wrong, much less a conscience to sort good and bad. This makes AIs truly alien creatures, despite their best impressions.

Their failure modes are not even random (with apologies to Wolfgang Pauli). Society has no institutional mechanisms to deal with AIs’ deeply weird failures and yet we’re letting them drive on our public roads.

We casually talk about AIs “reading” and “writing”. We see them “seeing”; we interpret their outputs as “interpretations”.

These are all metaphors, and they’re wildly misleading.

The post It’s safe to assume AIs can at least read. Isn’t it? appeared first on Lockstep.


KuppingerCole

Cloud Security - Problem Solved? No!


by Osman Celik

Cloud computing is an essential tool for organizations of all sizes, from small businesses to large enterprises. However, as cloud adoption continues to accelerate, securing cloud environments remains a major challenge. Today, organizations still face significant difficulties in protecting their data and resources in the cloud. One of the main reasons is the complexity of cloud environments and the shared responsibility model, which distributes security duties between the cloud provider and the user. Many organizations still struggle to understand where their cloud security responsibilities begin and end. This lack of clarity continues to leave cloud environments exposed to a wide range of vulnerabilities.

Organizations that operate in highly regulated industries, such as healthcare, finance, and government, are particularly vulnerable to cloud security challenges. These sectors deal with large amounts of sensitive data, such as personal information, financial records, and healthcare data. This makes them the prime targets for cybercriminals. Additionally, these industries face strict regulatory requirements that further complicate their cloud adoption. While larger organizations may have the resources to invest in advanced tools and hire experts, some small and medium-sized enterprises (SMEs) face challenges in implementing necessary security measures due to limited resources.

Cloud Security Challenges in 2024

In 2024, challenges like data breaches, misconfigurations, insider threats, regulatory compliance issues, third-party risks, and insufficient identity and access management (IAM) continue to be the top cloud security concerns for organizations. Data breaches remain one of the most significant risks because of the high volume of sensitive data stored in the cloud. Attackers can easily exploit weak security measures and vulnerabilities to gain unauthorized access to confidential data. Misconfigurations, such as exposing databases to the public without proper encryption, are also common and frequently result in massive data leaks.

The complexity of cloud environments contributes to the human factor, which in turn leads to insider threats, as employees may overlook some of the critical security measures. Whether intentional or accidental, insiders can cause severe damage by accessing sensitive data, misusing credentials, or exposing systems to cybercriminals. Regulatory challenges add another layer of complexity, as organizations must comply with regional and/or global compliance requirements, such as the General Data Protection Regulation (GDPR), Payment Card Industry Data Security Standard (PCI-DSS), or the Health Insurance Portability and Accountability Act (HIPAA). Ensuring regulatory compliance in cloud environments can be resource intensive and expensive. As many organizations depend on external vendors and cloud service providers to handle critical parts of their infrastructure, they are also often exposed to third-party risk. When one of these third parties is compromised, it can lead to security incidents across the entire ecosystem.

Lack of adequate IAM practices increases the risk of security breaches in cloud environments, given the role of managing user access to the resources. Weak IAM policies lead to unauthorized access and allow attackers to exploit accounts and passwords. Lack of multi-factor authentication (MFA) also poses a risk of intrusions into cloud systems. These IAM-related vulnerabilities highlight the need for organizations to enforce strict access controls and regularly audit user permissions to ensure they are in line with the principle of least privilege.
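A toy sketch of those IAM principles, least privilege plus a hard MFA requirement, with roles and permissions that are purely illustrative:

```python
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "admin": {"read:reports", "write:reports", "manage:users"},
}

def authorize(role: str, action: str, mfa_verified: bool) -> bool:
    """Deny by default: no second factor means no access at all."""
    if not mfa_verified:
        return False
    return action in ROLE_PERMISSIONS.get(role, set())

assert authorize("analyst", "read:reports", mfa_verified=True)
assert not authorize("analyst", "manage:users", mfa_verified=True)   # least privilege
assert not authorize("admin", "manage:users", mfa_verified=False)    # MFA enforced
```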

The Financial Impact of Security Incidents is Alarming

According to IBM's 2024 "Cost of a Data Breach" report, the global average cost of a data breach in the cloud was $4.88 million per incident, with the healthcare industry experiencing the highest average costs at $9.77 million per breach. Additionally, misconfigurations were estimated to have cost organizations over $3.18 trillion in 2023, due to the combined expenses of lost revenue, remediation efforts, and regulatory fines. These figures highlight the financial impact that cloud security failures can impose.

Hybrid Cloud is still an Option

Cloud security concerns are still a significant factor preventing some organizations from fully embracing cloud technology. While many businesses recognize the benefits of moving to the cloud, security concerns often lead to delayed adoption of cloud systems. In some cases, organizations delay cloud migration or implement hybrid solutions. Such organizations often store critical data on-premises while only shifting non-sensitive data to the cloud. This approach allows them to maintain greater control over their most valuable assets but limits the full potential of cloud-based innovation.

Enhance Your Cloud Protection through Advanced Security Strategies

With employees and devices accessing cloud resources from anywhere, Zero Trust assumes that threats could arise both inside and outside the network. The Zero Trust model enforces a "never trust, always verify" approach, ensuring that all users, devices, and applications are continuously authenticated and authorized before accessing resources.

AI and ML automate threat detection, analysis, and response actions. These technologies can also process enormous volumes of data in real-time, enabling security systems to detect anomalies and malicious activities much faster than human analysts. By learning from patterns in cloud traffic and user behavior, AI and ML can anticipate potential cloud security threats and act proactively. However, these technologies are not risk free. Attackers can also use them to launch more advanced attacks that learn how to bypass security systems.

Automated compliance management tools facilitate the monitoring of cloud environments, generate compliance reports, and alert users to any potential violations. These solutions reduce the manual effort required for audits and ensure that organizations stay up to date with changing regulatory standards.

Cloud Security Posture Management (CSPM) solutions address misconfigurations and maintain strong security hygiene across cloud environments. CSPM tools monitor cloud configurations to identify risks such as exposed storage buckets, insecure firewall settings, or overly permissive access controls. Misconfigurations are one of the most common causes of cloud security breaches, and CSPM helps organizations detect and remediate these issues before they can be exploited. As more organizations adopt multi-cloud or hybrid cloud strategies, CSPM provides the visibility and control needed to secure these complex environments.
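To give a flavor of what a single CSPM-style check looks like, here is a hedged sketch that scans for S3 buckets whose ACL grants access to everyone. It requires boto3 and configured AWS credentials; real CSPM products run hundreds of such checks across providers.

```python
import boto3  # pip install boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant.get("Grantee", {}).get("URI") == ALL_USERS:
            print(f"MISCONFIGURED: {bucket['Name']} grants "
                  f"{grant['Permission']} to everyone")
```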

We are Back in Town - cyberevolution 2024

We are excited to invite you to our cyberevolution event in Frankfurt am Main on December 3-5, 2024. We will be exploring a wide range of cybersecurity topics, with plenty of chances to chat with industry experts. Cloud Security will be one of the big topics on the agenda.

Here are some sessions that might catch your interest:

- Cloud Application Security from CNAPP to AINAPP
- The Cloud Conundrum: Balancing Agility with Security
- Security at Scale - Mastering Cloud Security in the Cyberwar Era

You can also check out our published Leadership Compasses below:

- Leadership Compass – Zero Trust Network Access (ZTNA)
- Leadership Compass – Cloud Security Posture Management (CSPM)
- Leadership Compass – Cloud Native Application Protection Platforms (CNAPP)

Lockstep

Money, the Metaverse and David Birch (Making Data Better EP15)


George and I had a virtual blast recently on our podcast with David Birch. As an adviser and global raconteur in payments, identity and digital transformation, Dave needs little introduction. With Meeco COO Victoria Richardson, he has just co-authored a fascinating book, Money in the Metaverse: Digital assets, online identities, spatial computing and why virtual worlds mean real business.

Dave took us into their thinking about secure, private transactions in the metaverse(s).

Virtual money makes the virtual world go around

Dave was drawn to write a new book after finding it strangely clunky to pay for things in at least one virtual world.

He told us about being at an industry event with lots of people “walking around as avatars and meeting each other”. That all seemed real enough until he wanted to buy something. He had to come out of the metaverse and undergo an all-too-real payment rigmarole—scanning a QR code, then visiting another website, typing in card details—before he could rejoin the virtual fun.

Surely, he thought, “I should be doing things inside the metaverse instead of taking off my VR glasses!”. He enlisted Victoria as co-author; he describes her as a “brilliant digital strategist” with a proper framework for thinking about these things.

The state of the art in self-contained metaverse commerce is all about DeFi, Web 3, tokenisation and cryptocurrency.  Loudly sceptical about these things IRL, Dave says “there’s absolutely no doubt” they will form “the next generation financial market infrastructure”.

Dave has an optimistic and generous view of the metaverse. “It’s early days” (of course) yet he is confident that the metaverse’s many pioneers will continue to refine and innovate and surprise us, taking AR/VR technology in new directions.

He likens Apple’s Vision Pro headset to the Apple Newton of the late 1990s. It wasn’t attractive to typical consumers either, but over time, everyone saw that the Newton was the prototype iPad.  So who’s to say where the Vision Pro will lead?

And I should add that Dave does not think $3,000 for a Vision Pro is unreasonable.

In this blog, I’m going to go deep once more on authenticity in the metaverse (I’ve previously looked at how the metaverse should force a rigorous re-examination of digital identity).

But first, here’s a sample of the areas George and I covered with Dave (don’t forget to take a listen):

- In less than 45 minutes, we traversed gaming, brand marketing, car insurance, banking, newspapers and print media, comedy, concert tickets, adult services, COVID, teenage mental health, and virtual girlfriends and boyfriends.
- Digital, says Dave, is “the natural UX for young people today. It’s how they meet their friends, how they socialize, how they connect. So, in a very short time, brands are going to need to be in those spaces as well.”
- On ownership and tokenisation: “[The] amount of effort that’s already going into the proto-metaverses is substantial, but it’s hamstrung by the fact that the things that they build aren’t theirs. They belong to the platform.”
- On economics, in-built platform security is such an imperative that Dave and Victoria see virtual worlds as potentially safer and more efficient than the real world. As a result, transaction costs will fall, and businesses in all sectors will feel pressure to move into the metaverse.

Real authenticity

When we turned to authenticity, Dave set the scene as follows:

“Of course, in the metaverse nothing’s real, putting to one side what real means … we certainly don’t want the metaverse to end up in the mess that we’re in at the moment with the internet where we see fake [TV personalities] shilling cryptocurrency”.

Cryptographic security must be “part of the warp and weft” of a new infrastructure, in a way that we simply overlooked in the rush to Web 1 and Web 2.  Dave points out that a whole “panoply of keys, key generation, certificates, digital signatures and encryption” was missing from the internet.  He is a forceful champion of security being inherent to the infrastructure; on this point he calls himself a “maximalist”.

What would such security look like? Well, we might not even notice it. Crucially, Dave does not imagine us having to prove our bona fides by showing pictures of virtual driver licences. I agree; it would be moronic to simulate a superficial verification process when it is so bad in real life.

Instead, Dave foresees metaverse platforms just knowing your authorisation attributes and applying them to covertly regulate your virtual experience. So, if for example you’re not 18 years old and you approach an age-restricted venue or event, then you won’t even have the option of going in.

“In any metaverse I’d want to take part in, if a photo doesn’t have a digital signature that says ‘this comes from the New York Times’ or ‘from George Peabody’, I don’t want to even see it.”

So, one crucial distinction he sees between the metaverse and any virtual world built so far on the internet, is that authenticity will be part of the infrastructure.
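A minimal sketch of that "only render signed content" idea: the client keeps trusted publishers' public keys and refuses to show any asset whose signature fails. Key distribution and the asset format are illustrative assumptions.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

publisher_key = Ed25519PrivateKey.generate()         # e.g. held by a newsroom
TRUSTED_KEYS = {"nytimes.example": publisher_key.public_key()}

photo = b"...image bytes..."
signature = publisher_key.sign(photo)                # attached at publication

def render(asset: bytes, sig: bytes, claimed_source: str) -> bool:
    """Show the asset only if its signature checks out for the claimed source."""
    key = TRUSTED_KEYS.get(claimed_source)
    if key is None:
        return False                                 # unknown source: don't render
    try:
        key.verify(sig, asset)
        return True
    except InvalidSignature:
        return False

print(render(photo, signature, "nytimes.example"))        # True
print(render(b"tampered", signature, "nytimes.example"))  # False
```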

In a sense, everything in a Dave Birch metaverse will be real!

Questions

A simulated world in which everything we see is true could save digital civilisation. But we need to approach any Utopia with caution.

What’s real in an unreal world? What is truth? If the answer is everything’s relative, then authenticity will need to be configurable.

Beauty is in the eye of the beholder, and authenticity in the metaverse needs to be in the hands of the beholder as well.

The point of the metaverse is to shift reality. If users have any freedom to adjust what’s real, then they will need to set their own authenticity standards. I might for example be able to have the BBC determine what political stories are true as far as I am concerned and have New Yorker film critics control my cinema experience.

Inevitably, beneath any metaverse, are the unseen platforms. As we discussed with Dave, platforms have had most of the control so far. Dave calls for a shift in control and asset ownership from landlords to denizens.

There are many privacy issues. If a metaverse platform knows my personal attributes and applies them to shape my virtual experience (such as removing pubs and clubs from my experience if I am under-age) then the platform must be watching what I am trying to do around the clock.

I guess that’s a price users could pay for the seamlessness of having the world “know” them without having to see a virtual ID card. That trade-off might be perfectly fine—if we trust the platforms, and/or they are closely regulated.

If metaverses even come close to mimicking the richness of the real world, the platforms will have unprecedented executive control over our activity. They will literally direct what we experience and even how we behave, because the platforms’ software will mediate our very existence in the worlds.

Is the metaverse going to need benign meta-dictators?

More on Money in the Metaverse

Reviewed by Irish Tech News, May 2, 2024.

Dave was interviewed on the Pay it Forward podcast, June 28, 2024.

Victoria and Dave were interviewed on The Banker, July 10, 2024.


The post Money, the Metaverse and David Birch (Making Data Better EP15) appeared first on Lockstep.


Tokeny Solutions

21X and Tokeny Collaborate to Expand Global Liquidity and Tradability of Tokenized Real-World Assets


LUXEMBOURG, 10 September 2024 – 21X and Tokeny have announced today that they have signed a strategic partnership as they look to revolutionize capital markets. Having developed the very first DLT trading and settlement system (DLT TSS) under the European Union’s DLT regime, 21X is teaming up with Tokeny – the leading onchain finance operating system – to allow issuers using Tokeny’s white-label platform or APIs to admit financial instruments to trading on 21X.

21X’s smart contract-based trading venue allows participants for the first time to undertake fully regulated trading of financial instruments according to the EU DLT Regime. 21X is working with a number of tokenization companies to permit matching, trading and settlement of tokenized assets – and now includes Tokeny, the leading tokenization platform.

As part of this collaboration, Tokeny connects DINO, the distribution network for tokenized real-world assets (RWA) and securities, with 21X’s market infrastructure to foster liquidity and tradability of ERC-3643-based assets. Acting as the interoperable distribution network for tokenized assets, DINO plays a pivotal role in the digital asset ecosystem with an extensive reach of over 50 liquidity platforms, providing the flexibility for ERC-3643-based tokenized securities to be listed and traded seamlessly across any of these blockchain-based channels.

Tokeny and 21X are enhancing compatibility between ERC-3643 tokens and the DLT trading and settlement system of 21X. By providing access to 21X, Tokeny’s customers will be able to list their assets on an ESMA-regulated secondary market, ensuring end-to-end compliance for issuers and investors. Meanwhile, clients of 21X gain access to Tokeny’s white-label tokenization solutions, providing them with the ability to issue, manage, and distribute tokenized securities with a no-code platform while expanding their investor base through liquidity pools and participants within the DINO distribution network.
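To illustrate what end-to-end compliance means in practice, here is a hedged web3.py sketch that queries a canTransfer-style view (part of the compliance interface described in the public ERC-3643 / T-REX documentation) before submitting a transfer. The RPC endpoint and addresses are placeholders, and the minimal ABI fragment is an assumption.

```python
from web3 import Web3  # web3.py v6

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # placeholder endpoint

# Assumed minimal ABI fragment for the ERC-3643 compliance pre-check.
CAN_TRANSFER_ABI = [{
    "name": "canTransfer", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "_from", "type": "address"},
               {"name": "_to", "type": "address"},
               {"name": "_amount", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bool"}],
}]

compliance = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000001"),
    abi=CAN_TRANSFER_ABI,
)
seller = Web3.to_checksum_address("0x0000000000000000000000000000000000000002")
buyer = Web3.to_checksum_address("0x0000000000000000000000000000000000000003")

ok = compliance.functions.canTransfer(seller, buyer, 1000).call()
print("transfer allowed" if ok else "transfer blocked by compliance")
```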

“21X is partnering with Tokeny, one of the world’s leading asset tokenization providers, with over 120 customers and almost $28 billion in assets tokenized, to date. Everybody is working hard to have all the elements of our digital asset ecosystem in place to go-live by the end of 2024. This includes building strong partnerships with the likes of Tokeny and we are looking forward to their customers’ digital assets being available to trade on 21X as soon as our exchange begins operating.” – Max J. Heinzle, Founder & CEO of 21X

“We're excited to team up with 21X to not only allow our customers to list tokenized securities on 21X but also to collaborate in building liquidity rails by expanding the DINO distribution networks to support ERC-3643 token issuers. Together, we are laying the foundation for the future of tokenization.” – Luc Falempin, CEO of Tokeny

About 21X

21X is a Frankfurt-based fintech, developing a blockchain-powered exchange for tokenized assets, which will operate under the regulatory supervision of the European Securities and Markets Authority (ESMA).

With the institutional adoption of tokenized securities, 21X is ideally positioned to enable smart contract-based issuance, trading and settlement of tokenized stocks, bonds and funds. 21X has submitted its license application to operate a DLT trading and settlement system (DLT TSS) and is expected to be one of the first companies authorized to operate under the EU DLT regime.

See the short explainer video on 21X and our blockchain-based exchange here.

About Tokeny

Tokeny provides the leading onchain finance operating system, leveraging market standards like ERC-3643, to bring control, compliance, and efficiency in the era of open finance. It enables seamless issuance, transfer, and management of tokenized securities. The enterprise-grade platform and APIs unify fragmented onchain and offchain workflows, integrating essential services to eliminate silos. By automating operations, offering innovative onchain services, and connecting with any desired distributors, Tokeny helps financial actors attract more clients and improve liquidity. Trusted globally, Tokeny has successfully executed over 120 use cases across five continents and facilitated 3 billion onchain transactions and operations.

Website | LinkedIn | X/Twitter

The post 21X and Tokeny Collaborate to Expand Global Liquidity and Tradability of Tokenized Real-World Assets appeared first on Tokeny.


IDnow

IDnow’s YRIS solution obtains Substantial Level of Assurance for digital identities according to eIDAS

With the latest certification of French Cybersecurity Agency (ANSSI), YRIS is now eligible to be featured on FranceConnect+ Munich/Rennes, September 10, 2024 – IDnow, a leading identity verification platform provider in Europe, has received the security Visa from French Cybersecurity Agency (ANSSI) recognizing the Substantial Level of Assurance (LoA) certification for digital identities for its […]
With the latest certification of French Cybersecurity Agency (ANSSI), YRIS is now eligible to be featured on FranceConnect+

Munich/Rennes, September 10, 2024 – IDnow, a leading identity verification platform provider in Europe, has received the security Visa from the French Cybersecurity Agency (ANSSI) recognizing the Substantial Level of Assurance (LoA) certification for digital identities for its YRIS digital identity wallet. The LoA is defined by the European eIDAS regulation (electronic Identification, Authentication and Trust Services) and was certified by the Agence nationale de sécurité des systèmes d’information (ANSSI).

Seamless reuse of verified digital identity credentials

YRIS was first launched in June 2022 and allows the seamless reuse of verified digital identity credentials. It enables users to easily and securely prove their identity without having to scan a physical ID document and their face each and every time access to a service is needed. The strength of YRIS also lies in the fact that it allows all French citizens to create this digital identity based on the old French national ID card, the new national ID card, and the residence permit.

Today, more than 450,000 users in France are using YRIS in their day-to-day lives via FranceConnect, the national digital identity federator, where users authenticate or identify themselves for eGovernment and other regulated services in France. The new certification also qualifies YRIS to be featured on FranceConnect+, and would thus make another digital identity provider available on the platform.

FranceConnect+ is similar to FranceConnect, but its Substantial LoA provides an eIDAS node that will permit French citizens to be mutually recognized on services in other European Union member states with their French digital identity. It can be used to carry out administrative procedures with more stringent user identification requirements, such as using training credits, obtaining subsidies, etc. It can also be used to generate qualified electronic signatures, to send or receive electronic registered mail, and to meet identification requirements for financial transactions subject to AML-CTF regulations.

Authentication and verification in financial services, insurance, HR sectors and electronic registered mail

Besides possible integration on FranceConnect+, YRIS can also be used for proof of identity and as a secured method of strong authentication in the financial or insurance industries, and in human resources. Several use cases, such as financial account opening, insurance contracts, loans or rental agreements, can now be processed via YRIS thanks to the new Substantial LoA. Based on the eIDAS regulation, YRIS can also be used by providers of electronic registered mail services as a compliant method for identifying the recipient, a promising market for mail replacement.

“This certification is the latest company milestone for IDnow, which remains committed to playing a key role in Europe’s ambition to create and offer a single, reliable and secure digital identity to its citizens and residents,” says Marc Norlain, Managing Director and Head of the Reusable Identities Unit at IDnow.

“With their reusable digital identities, end users in France will be able to open a bank account or carry out any banking operation, perform a qualified electronic signature, open an online gaming account, or send or receive an electronic registered letter. We are at a pivotal moment in the digital identity ecosystem in France and Europe overall and IDnow is proud to lead the way with our expertise and our proven solutions.”


Veridium

Veridium Joins IGEL at Disrupt 2024: Elevating Security for the Edge

Veridium Joins IGEL at Disrupt 2024: Elevating Security for the Edge   We’re excited to announce that Veridium will be joining forces with our strategic partner IGEL at IGEL Disrupt 2024! This flagship event is the premier gathering for cloud workspaces and digital transformation enthusiasts, and we can’t wait to showcase how Veridium’s cutting-edge identity […]
Veridium Joins IGEL at Disrupt 2024: Elevating Security for the Edge

 

We’re excited to announce that Veridium will be joining forces with our strategic partner IGEL at IGEL Disrupt 2024! This flagship event is the premier gathering for cloud workspaces and digital transformation enthusiasts, and we can’t wait to showcase how Veridium’s cutting-edge identity authentication solutions complement IGEL’s advanced edge computing environments.

As a pioneer in revolutionizing user identity security, Veridium empowers organizations to enhance their security posture through our Identity Assurance Platform. By reliably verifying user identities and devices, we ensure that your digital workspaces are protected by AI-based identity threat protection and continuous authentication. Our platform addresses a fundamental security challenge: accurate and secure user authentication from start to finish—across virtual desktops, cloud workspaces, and beyond.

Veridium’s platform integrates seamlessly with existing Identity/SSO providers, while extending security to ZTNA, MDM, and EDR solutions. We offer the widest range of authenticators on the market, including passwordless and phishing-resistant options, FIDO tokens, and patent-protected biometric solutions (such as contactless fingerprints, facial recognition, and behavioral biometrics). Whether your organization is beginning its identity and access management (IAM) journey or refining mature processes, Veridium ensures consistent, secure authentication that keeps pace with evolving threats.

At Disrupt 2024, join us to discover how Veridium and IGEL are transforming secure access for the modern digital workspace. Experience our live demos and hear from our experts on how we’re enabling secure, seamless, and scalable solutions across VDI and DaaS environments.

Special Offer: Use coupon code DISRUPT24EXCLUSIVE to get your ticket for just 120 Euros!

Read our Data Sheet to learn more about our IGEL integration! Stay tuned for updates, and we look forward to seeing you at IGEL Disrupt 2024!

PingTalk

Ping Identity: Leading the Future of Passwordless Authentication

Eliminate passwords and user friction with Ping Identity. Learn why we're leaders in passwordless authentication in the latest Leadership Compass report.

Passwords are a security nightmare and the biggest cause of user friction. However, getting rid of them in your environment may require a platform approach. The latest Leadership Compass report on Passwordless Authentication for Enterprises highlights Ping Identity as a leader in this space. Here's an in-depth look at why Ping Identity stands at the forefront of passwordless authentication for enterprises.


What is Banking as a Service (BaaS)?

Understand Banking as a Service (BaaS), its relation to embedded finance, and crucial identity security practices for providers.

Banking as a service (BaaS) is a model that allows non-bank businesses to offer financial services by integrating banking capabilities directly into their own products. This article will explain BaaS, how it works, and why identity and access management (IAM) solutions are necessary for earning trust. You'll also learn how IAM, including both customer identity and access management (CIAM) and workforce identity, enables BaaS to function securely and efficiently.


Okta

Secure OAuth 2.0 Access Tokens with Proofs of Possession

In OAuth, a valid access token grants the caller access to resources and the ability to perform actions on the resources. This means the access token is powerful and dangerous if it falls into malicious hands. The traditional bearer token scheme means the token grants anyone who possesses it access. A new OAuth 2.0 extension specification, Demonstrating Proof of Possession (DPoP), defines a standa

In OAuth, a valid access token grants the caller access to resources and the ability to perform actions on the resources. This means the access token is powerful and dangerous if it falls into malicious hands. The traditional bearer token scheme means the token grants anyone who possesses it access. A new OAuth 2.0 extension specification, Demonstrating Proof of Possession (DPoP), defines a standard way that binds the access token to the OAuth client sending the request, elevating access token security.

The high-level overview of DPoP uses public/private keys to create a signed DPoP proof that the authorization and resource server use to confirm the authenticity of the request and requesting client. This way, the token is sender-constrained, and a token thief is less likely to use a compromised access token. Learn more about the problems DPoP solves and how it works by reading:

Elevate Access Token Security by Demonstrating Proof-of-Possession

Protect your OAuth 2.0 access token with sender constraints. Learn about possession proof tokens using DPoP.

Alisa Duncan

The primary use case for DPoP is for public clients, but the spec elevates token security for all OAuth client types. Public clients are applications where authentication code runs within the end user’s browser, such as Single-Page Applications (SPA) and mobile apps. Due to their architecture, public clients inherently have higher risk and less security in authentication and authorization. Public clients can’t leverage a client secret used by application types that can communicate with the authorization server through a “back-channel,” a network connection opaque to users, network-sniffing attackers, and nosy developers. Without proper protection, a SPA may store tokens where they are accessible to the end user and exposed to injection-related attacks. DPoP adds an extra protection layer that makes tokens less usable if stolen.
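To make that mechanism concrete, here is a rough sketch of what building a DPoP proof involves, assuming the browser’s WebCrypto API. This is illustrative only; the Okta Auth JS SDK does all of this for you, and the function names are made up for the example:

function base64url(data: ArrayBuffer | Uint8Array): string {
  const bytes = data instanceof Uint8Array ? data : new Uint8Array(data);
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

// Sketch: export the public key as a JWK for the header, build the payload
// (HTTP method, URI, issue time, unique ID), then sign with the private key.
async function createDpopProof(keyPair: CryptoKeyPair, htm: string, htu: string): Promise<string> {
  const enc = new TextEncoder();
  const jwk = await crypto.subtle.exportKey('jwk', keyPair.publicKey);
  const header = { alg: 'RS256', typ: 'dpop+jwt', jwk };
  const payload = { htm, htu, iat: Math.floor(Date.now() / 1000), jti: crypto.randomUUID() };
  const signingInput =
    base64url(enc.encode(JSON.stringify(header))) + '.' +
    base64url(enc.encode(JSON.stringify(payload)));
  const signature = await crypto.subtle.sign('RSASSA-PKCS1-v1_5', keyPair.privateKey, enc.encode(signingInput));
  return `${signingInput}.${base64url(signature)}`;
}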

Table of Contents

Get the starting Angular, React, or Vue project
Add OAuth 2.0 and OpenID Connect (OIDC) to your application
Configure OAuth scopes for Okta API calls
Inspect the OAuth 2.0 bearer tokens and request resources manually
Use secure coding techniques to protect your web apps
Migrate your SPA to use DPoP
Trace the token request requiring a DPoP nonce
Request resources using DPoP headers
Manually request DPoP-protected resources
Store cryptographic keys in browser applications
Use modern evergreen browsers for secure token handling
Learn more about web security, DPoP, and OAuth 2.0

In this post, you’ll experiment with DPoP and step through migrating a public client application from OAuth bearer tokens to DPoP tokens. We’ll build upon the existing OAuth 2.0 Authorization Code flow. Need a refresher? Check out this post:

How Authentication and Authorization Work for SPAs

Authentication and authorization in public clients like single-page applications can be complicated! In this post, we'll walk through the Authorization Code flow with Proof Key for Code Exchange extension to better understand how it works and what do with the auth tokens you get back from the process.

Alisa Duncan

Note

This code project is best for developers with web development experience, knowledge of debugging network requests and responses, and familiarity with OAuth and OpenID Connect (OIDC).

The post uses Angular, but you can follow the concepts and network calls using a sample project in your favorite SPA framework. Check out samples using React or Vue. You’ll need to make a couple of minimal changes to the code. I will call out the changes, but I will not post the specific code or instructions.

Are you following the step-by-step code instructions in Angular? This post assumes you already have Angular knowledge. If you are an Angular newbie, start by building your first Angular app using the tutorial created by the Angular team.

A hands-on project requires tools for local web development.

Prerequisites

You’ll need the following tools:

Node.js v18 or greater
A web browser with good debugging capabilities, such as Chrome
Your favorite IDE. Still looking? I like VS Code and WebStorm because they have integrated terminal windows.
Terminal window (if you aren’t using an IDE with a built-in terminal)
Git and an optional GitHub account if you want to track your changes using a source control manager
An HTTP client that shows the HTTP requests and responses, such as the Http Client VS Code extension or curl

Get the starting Angular, React, or Vue project

You’ll use a starter project. These instructions are for the Angular sample project. If you are following along in React or Vue, replace the GitHub repo location with the URL for the sample you’re using.

Open a terminal window and run the following commands to get a local copy of the project in an okta-client-dpop-project directory and install dependencies. Feel free to fork the repo so you can track your changes.

git clone https://github.com/oktadev/okta-angular-dpop-example.git okta-client-dpop-project
cd okta-client-dpop-project
npm ci

Open the project in your favorite IDE. The project includes Okta’s client authentication SDKs, a sign-in button, a profile route that displays user information by calling the OIDC /userinfo endpoint, and a route that makes a call to Okta’s Users API. Both HTTP requests require an access token, so we’ll follow the requests and responses for these two calls.

React and Vue project instructions

React and Vue projects need a couple of changes. Change the profile component to call oktaAuth.token.getUserInfo() and display the JSON output. Add a call to Okta’s User API /api/v1/users. You’ll replace the domain name later. You may want to create a new Users component (and route) to match the Angular sample.

Use the SDK reference docs for React and Vue.

You need to set up an authentication configuration to serve the project. Let’s do so now.

Add OAuth 2.0 and OpenID Connect (OIDC) to your application

You’ll use Okta to handle authentication and authorization in this project securely. Okta APIs have built-in DPoP support — how secure and handy! We’ll experiment with DPoP in the client application by calling Okta’s APIs.

React and Vue project instructions

Replace the two redirect URIs to match the port and callback route for the application. You’ll find the URI for both in your project’s README file. Follow the instructions in the README to add the issuer and client ID to the app. Use the format for the issuer. Notice this is different from the starter code.

Before you begin, you’ll need a free Okta developer account. Install the Okta CLI and run okta register to sign up for a new account. If you already have an account, run okta login. Then, run okta apps create. Select the default app name, or change it as you see fit. Choose Single-Page App and press Enter.

Use http://localhost:4200/login/callback for the Redirect URI and set the Logout Redirect URI to http://localhost:4200.

What does the Okta CLI do?

The Okta CLI will create an OIDC Single-Page App in your Okta Org. It will add the redirect URIs you specified and grant access to the Everyone group. It will also add a trusted origin for http://localhost:4200. You will see output like the following when it’s finished:

Okta application configuration:
Issuer:    https://dev-133337.okta.com/oauth2/default
Client ID: 0oab8eb55Kb9jdMIr5d6

NOTE: You can also use the Okta Admin Console to create your app. See Create an Angular App for more information.

Note the Issuer and the Client ID. You’ll need those values for your authentication configuration, which is coming soon.

There’s one manual change to make in the Okta Admin Console. Add the Refresh Token grant type to your Okta Application. Open a browser tab to sign in to your Okta developer account. Navigate to Applications > Applications and find the Okta Application you created. Select the name to edit the application. Find the General Settings section and press the Edit button to add a Grant type. Activate the Refresh Token checkbox and press Save.

Leave the Okta Admin console open. You’ll continue making changes in there.

I already added Okta Angular and Okta Auth JS libraries to connect our Angular application with Okta authentication.

In your IDE, open src/app/app.config.ts and find the OktaAuthModule.forRoot() configuration. Replace {yourOktaDomain} and {yourClientID} with the values from the Okta CLI.
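If you want to sanity-check the result, here is a minimal sketch of what the completed configuration might look like; it assumes the okta-angular setup described above, and the starter project’s exact shape may differ:

import { importProvidersFrom } from '@angular/core';
import { OktaAuthModule } from '@okta/okta-angular';
import { OktaAuth } from '@okta/okta-auth-js';

// Minimal sketch: use the Issuer and Client ID from the Okta CLI output.
const oktaAuth = new OktaAuth({
  issuer: 'https://{yourOktaDomain}/oauth2/default',
  clientId: '{yourClientID}',
  redirectUri: 'http://localhost:4200/login/callback',
});

// Registered in the application config's providers array:
importProvidersFrom(OktaAuthModule.forRoot({ oktaAuth }));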

Configure OAuth scopes for Okta API calls

We’re calling an Okta API, so we must add the required OAuth scopes.

In the Okta Admin Console, navigate to the Okta API Scopes tab in your Okta application. Find the okta.apps.read and okta.users.read.self scopes and press the ✔️ Grant button for each.

Open the src/app/users/users.component.ts and find the call to list users: /api/v1/users. We’re taking shortcuts here, such as calling the API directly in the component for this demonstration project. In production-quality Angular apps, ensure you architect your application following best practices so you can add automated tests and troubleshoot issues quickly.

Replace {yourOktaDomain} with your Okta domain.

React and Vue project instructions

Add the two scopes to the OIDC configuration for the application. Search for “scopes” and change the array to

scopes: ['openid', 'profile', 'email', 'offline_access', 'okta.users.read.self', 'okta.apps.read'],

Replace the {yourOktaDomain} in the Okta Users API call you added in the prior section.

Start the app by running:

npm start

Open a browser tab to view the app. Open the debugging view that shows the console and network requests. Since I am using Chrome, I’ll open DevTools. Enable Preserve log in the Console and Network tabs. For the Console tab, you’ll find the preserve log option after opening the gear menu.

Let’s ensure you can sign in, call the /userinfo endpoint to see your user information, and call Okta Users API. You’ll use the Authorization Code flow and redirect to Okta for the authentication challenge. Once you emerge victorious by assuring the identity provider you are who you claim to be, the authorization server redirects you back to the application. The redirect URI includes the authorization code. Okta’s SDK (the OIDC client library) calls the /token endpoint to exchange the authorization code for tokens.

After you sign in, the Angular app will display routes for “Profile” and “Users.” Navigating these routes calls the /userinfo and Users API. If you can access the routes and don’t see any HTTP request errors, you’re good to go!

Inspect the OAuth 2.0 bearer tokens and request resources manually

After signing in, you have the OAuth 2.0 access token and the OIDC ID token. Okta stores the tokens in browser storage. In DevTools, open the Application tab to view browser storage data. Okta Auth JS defaults to local storage for tokens and is configurable based on your application needs. Expand Local storage, select the application, and expand the okta-token-storage key to see the tokens and token metadata. The tokenType property is Bearer.
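As a quick sanity check, you can read the same data from the DevTools console; okta-token-storage is the SDK’s default storage key mentioned above:

// Paste into the DevTools console to inspect the stored tokens.
const tokenStorage = JSON.parse(localStorage.getItem('okta-token-storage') ?? '{}');
console.log(tokenStorage.accessToken?.tokenType);   // "Bearer" before the DPoP migration
console.log(tokenStorage.accessToken?.accessToken); // the raw access token value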

Let’s see the API calls in action in the application. Navigate to both routes. In the Network tab, you see the initial /token, /userinfo, and Users API requests.

Let’s inspect the Users API request.

The request includes the Authorization header containing the token scheme and access token. You see the format Bearer <access_token>.

The entity holding the token can legitimately request resources. Let’s try using the token in another client, imitating the actions an attacker could take if they managed to capture it.

Note

Access tokens expire quickly. If too much time passes in these next steps, you may get a 401 Unauthorized. If you do, repeat the steps with a more recent access token by navigating between the profile and user routes to trigger a call to the API. It prompts the OIDC client (the Okta Auth JS SDK) to update expired tokens.

Copy the token from the browser, and double-check you captured the entire token. Open your HTTP client and run the following HTTP request replacing {yourOktaDomain} and {yourAccessToken}:

GET https://{yourOktaDomain}/api/v1/users HTTP/1.1
Authorization: Bearer {yourAccessToken}

If you use curl, add the verbose flag to see the request and response headers:

curl -v --header "Authorization: Bearer {yourAccessToken}" https://{yourOktaDomain}/api/v1/users

The call succeeds even though the HTTP client isn’t the same client the authorization server issued the token to (the sample app).

Let’s call another endpoint with the same access token, the Okta Applications endpoint. Run the following HTTP request replacing {yourOktaDomain} and {yourAccessToken}:

GET https://{yourOktaDomain}/api/v1/apps HTTP/1.1
Authorization: Bearer {yourAccessToken}

The call succeeds even though you call from a different client, like you saw in the prior step calling the Users API. The call succeeds for a privileged user as long as the Okta Application has the okta.apps.read grant and the OIDC config includes the scope. You may say that’s a lot of constraints, and you’re right. Okta adds a lot of guards when making API requests about the resources in the top-level Okta org, such as the list of Okta applications. This example demonstrates how powerful and vulnerable tokens issued for privileged users like admins are. Anyone with the token can make the same request, even if they are an attacker.

Back in the app, sign out to clear the authenticated session and tokens. We’re making changes that require you to sign in from scratch.

Use secure coding techniques to protect your web apps

All web applications must use secure coding techniques to protect from attacks, breaches, and malicious use. Public clients store their tokens within the user’s hardware and require thoughtful security practices. Read more about SPA web security and security practices within Angular in this four-part series:

Defend Your SPA from Security Woes

Learn the basics of web security and how to apply web security foundation to protect your Single Page Applications.

Alisa Duncan

It doesn’t matter if your application uses bearer tokens or DPoP; apps must employ secure coding practices. DPoP doesn’t prevent attackers from stealing your token but constrains its use. DPoP uses asymmetric cryptography to prove token ownership, so you must avoid exfiltration or unauthorized use of the keyset. An attacker can create valid proofs if they get a hold of the private key.

Let’s migrate the application to DPoP and try making these HTTP requests again.

Migrate your SPA to use DPoP

Open the Okta Admin Console in the browser and navigate to Applications > Applications. Find the Okta application for this project. In the General tab, find the General Settings section and press Edit. Check the Proof of possession checkbox requiring the DPoP header in token requests. Press Save. Sign out of the Okta Admin Console.

If you try signing in again without making any code changes, you’ll see an error in the Network tab for the /token request:

HTTP/1.1 400 Bad Request

{
  "error": "invalid_dpop_proof",
  "error_description": "The DPoP proof JWT header is missing."
}

All HTTP requests to DPoP-protected resources (including the /token request) require proof. We must enable DPoP in the OIDC configuration.

The Okta Auth JS SDK has a configuration property for DPoP as part of the OIDC config. In your IDE, open src/app/app.config.ts and find the OktaAuthModule.forRoot() configuration. Add the dpop: true property. Your OIDC config will look something like this:

{
  issuer: ...,
  clientId: ...,
  redirectUri: ...,
  scopes: ['openid', 'profile', 'offline_access', 'okta.users.read.self', 'okta.apps.read'],
  dpop: true
}

Once the application rebuilds and reloads in the browser, make sure you have debugging tools open and then sign in.

Trace the token request requiring a DPoP nonce

When you sign in, you’ll see the initial call to the /token endpoint fails.

Take a look at the call’s request headers. You’ll see a header called DPoP, which contains the DPoP proof in JWT format, which means we can decode it and inspect its contents. You can use a trustworthy online tool such as JWT.io debugger or Base64 decode the header and payload sections of the JWT locally. In the JWT format, the content from the beginning up to the first . character is the header, and the content between the two . characters is the payload.

The header contains the token type, dpop+jwt, the encryption algorithm, and the cryptographic key information tied to this proof. The payload includes minimal HTTP information and other properties to prevent token attack vectors.

{ "alg": "RS256", "typ": "dpop+jwt", "jwk": { /* Key information in JSON Web Key format */ } } { "htm":"POST", "htu":"/oauth2/v1/token", "iat":1724685617, "jti": "e84a...283bbf", }

Why did the initial call to /token fail? It’s because Okta requires an extra handshake that elevates security. The /token call requires a DPoP nonce, provided by Okta, to be included in the DPoP proof. In response to the first /token call, Okta returns the standard DPoP nonce error and the DPoP-Nonce response header containing the nonce the client incorporates into the proof.

HTTP/1.1 400 Bad Request
DPoP-Nonce: "SVD....ubNc"

{
  "error": "use_dpop_nonce",
  "error_description": "Authorization server requires nonce in DPoP proof."
}

Okta’s Auth JS SDK has built-in support for DPoP-Nonce errors. Look at the DPoP proof token’s payload of the successful /token request. The payload includes the nonce returned in the first call.

{ "htm":"POST", "htu":"/oauth2/v1/token", "iat":1724685617, "jti": "e852...28396", "nonce":"SVD....ubNc" }

The token request succeeds, and we now have a DPoP access token.

Request resources using DPoP headers

In the app, navigating to view your profile succeeds because the SDK supports DPoP resource requests. You’ll see an error when navigating the “Users” route that calls Okta’s User API.

The HTTP response includes information about why the call errored.

HTTP/1.1 400 Bad Request
WWW-Authenticate: Bearer authorization_uri="http://{yourOktaDomain}/oauth2/v1/authorize", realm="http://{yourOktaDomain}", scope="okta.users.read.self", error="invalid_request", error_description="The resource request requires a DPoP proof.", resource="/api/v1/users"

The current code to make the Users API call adds the access token using the Bearer scheme in the Authorization header, but that’s incorrect for DPoP. We must incorporate the DPoP proof and change the scheme in the HTTP request.

Open the auth interceptor in the IDE. You can find the code in the src/app/auth.interceptor.ts file.

React and Vue project instructions

Find the code you added to request Users and incorporate the Angular instructions in the project to add the DPoP proof header and the DPoP scheme.

The interceptor has a check to ensure it adds the access token to allowed origins only. Change the interceptor code as follows:

export const authInterceptor: HttpInterceptorFn = (req, next, oktaAuth = inject(OKTA_AUTH)) => {
  let request = req;
  const allowedOrigins = ['/api'];
  if (!allowedOrigins.find(origin => req.url.includes(origin))) {
    return next(request);
  }
};

We need the proof and the authorization header. We’ll generate both using Okta Auth JS. The SDK method requires the HTTP method and URI we intend to call. The URI shouldn’t include query parameters or fragments. The SDK method returns an object with properties matching headers and their values, so we can use the spread operator to populate the DPoP-required headers.

Change the interceptor to match the code below.

import { DPoPHeaders } from '@okta/okta-auth-js';
import { defer, map, switchMap } from 'rxjs';

export const authInterceptor: HttpInterceptorFn = (req, next, oktaAuth = inject(OKTA_AUTH)) => {
  // allowed origin check

  const url = new URL(req.url);
  return defer(() =>
    oktaAuth.getDPoPAuthorizationHeaders({ url: `${url.origin}${url.pathname}`, method: req.method })
  ).pipe(
    map((dpop: DPoPHeaders) => req.clone({ setHeaders: { ...dpop } })),
    switchMap((request) => next(request))
  );
};

Now, if you sign in and call the Users API, you’ll get the list of users in your Okta org using DPoP.

Manually request DPoP-protected resources

Earlier, we pretended to steal the access token to make other resource requests. You called the Okta Apps API using a JWT token to see the list of all the apps your Okta org contains. What happens if we try this again when the API requires DPoP?

In DevTools, open the Network tab and find the /users call. You need both the proof and the access token for your HTTP call. Make an HTTP request:

curl -v --header "Authorization: DPoP {yourAccessToken}" --header "DPoP: {yourDPoPProof}" https://{yourOktaDomain}/api/v1/apps

The API rejected your request! You get back an error stating the DPoP proof isn’t valid:

HTTP/1.1 400 Bad Request
WWW-Authenticate: DPoP algs="RS256 RS384 RS512 ES256 ES384 ES512", authorization_uri="http://{yourOktaDomain}/oauth2/v1/authorize", realm="http://{yourOktaDomain}", scope="okta.apps.read", error="invalid_dpop_proof", error_description="'htu' claim in the DPoP proof JWT is invalid."

If an attacker manages to capture both the proof and the token, they may only be able to make the same request. The proof constrains the calls to the HTTP method and URI, invalidating other HTTP requests.

How about making the same request?

curl -v --header "Authorization: DPoP {yourAccessToken}" --header "DPoP: {yourDPoPProof}" https://{yourOktaDomain}/api/v1/users

The API rejected your request! You still get back an error stating the DPoP proof isn’t valid:

HTTP/1.1 400 Bad Request
WWW-Authenticate: DPoP algs="RS256 RS384 RS512 ES256 ES384 ES512", authorization_uri="http://{yourOktaDomain}/oauth2/v1/authorize", realm="http://{yourOktaDomain}", scope="okta.users.read.self", error="invalid_dpop_proof", error_description="The DPoP proof JWT has already been used.", resource="/api/v1/users"

The proof also has two other protection mechanisms: the JWT unique identifier (jti) and the issued-at time (iat). When a resource server enforces the jti claim, it tracks previous calls to prevent proof reuse, so an attacker can’t replay the proof and the access token they stole. Enforcing the JWT ID isn’t required by the DPoP spec. The other protection mechanism is the proof’s issue timestamp, the iat claim. Resource servers check the issue time on proofs, and if it exceeds a threshold determined by the resource server, the server rejects the request.
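For intuition, here is a sketch of what those two checks could look like on a resource server. This is illustrative only, not Okta’s implementation; the in-memory map and the 60-second freshness window are assumptions for the example:

// Illustrative only; not Okta's implementation.
const seenJtis = new Map<string, number>(); // jti -> expiry (ms); assumed in-memory store
const MAX_PROOF_AGE_SECONDS = 60;           // assumed freshness threshold

function checkDpopProofClaims(claims: { jti: string; iat: number }): void {
  const nowSeconds = Math.floor(Date.now() / 1000);

  // iat check: reject proofs issued outside the acceptable window
  if (Math.abs(nowSeconds - claims.iat) > MAX_PROOF_AGE_SECONDS) {
    throw new Error('invalid_dpop_proof: proof issue time outside the allowed window');
  }

  // jti check: reject proofs we have already seen (replay prevention)
  if (seenJtis.has(claims.jti)) {
    throw new Error('invalid_dpop_proof: The DPoP proof JWT has already been used.');
  }
  seenJtis.set(claims.jti, Date.now() + MAX_PROOF_AGE_SECONDS * 1000);

  // Prune expired entries so the map doesn't grow without bound
  for (const [jti, expiry] of seenJtis) {
    if (expiry < Date.now()) seenJtis.delete(jti);
  }
}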

Store cryptographic keys in browser applications

We must securely store the keyset within the SPA and prevent an attacker from exfiltrating it. If an attacker has the keyset, they can impersonate you and make DPoP-protected calls. Fortunately, the Okta SDK uses a few different techniques to mitigate keyset hijacking without any extra coding on your part.

Local and session storage aren’t secure enough; this time, we’ll rely on IndexedDB storage. The typical use case for IndexedDB is storing a large volume of data, but it has some built-in security mechanisms that work well for protecting the keyset. The SubtleCrypto API supports generating non-exportable keys, preventing browser code from turning the private key into a portable format. IndexedDB stores the keys as a CryptoKeyPair object, and DB query results return a reference to the object, not the raw key. IndexedDB protects sensitive private keys but still works with the WebCrypto methods for signing proofs.
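Here is a sketch of that underlying technique using the WebCrypto and IndexedDB APIs directly; the database and store names are made up for the example, and the SDK manages its own storage:

// Generate a key pair whose private key can never be exported, then persist
// the CryptoKeyPair via IndexedDB's structured clone. Reads return a key
// reference usable with crypto.subtle.sign(), never the raw key material.
async function createAndStoreKeyPair(): Promise<void> {
  const keyPair = await crypto.subtle.generateKey(
    {
      name: 'RSASSA-PKCS1-v1_5',
      modulusLength: 2048,
      publicExponent: new Uint8Array([1, 0, 1]),
      hash: 'SHA-256',
    },
    false, // extractable = false: the private key cannot be exported
    ['sign', 'verify'],
  );

  const db = await new Promise<IDBDatabase>((resolve, reject) => {
    const request = indexedDB.open('ExampleDpopDb', 1); // assumed names
    request.onupgradeneeded = () => request.result.createObjectStore('keys');
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });

  db.transaction('keys', 'readwrite').objectStore('keys').put(keyPair, 'dpop-keypair');
}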

You can inspect the keys by following the steps:

Navigate to the Application tab in DevTools
Expand IndexedDB under the Storage sidenav
Expand OktaAuthJs > DPoPKeys

The downside is that the IndexedDB API is more difficult to use than other browser storage APIs. Because IndexedDB data persists, we must manually clean up the keys when done. The SDK handles cleanup if the user explicitly signs out, but we can’t guarantee a user always will. We can clear keys before signing in.

Open src/app/app.component.ts to find the signIn() method.

React and Vue project instructions

Find the code where the project calls the signInWithRedirect() method and follow the instructions described for Angular projects.

Add the call to clear keys as the first step in the signIn() method:

public async signIn() : Promise<void> {
  await this.oktaAuth.clearDPoPStorage(true);
  await this.oktaAuth.signInWithRedirect();
}

Use modern evergreen browsers for secure token handling

Creating and storing cryptographic keys in JavaScript apps requires a capable browser. Modern, evergreen browsers have the API support required for DPoP. Check browser capability if your app supports users who use less modern, more questionable browsers. The Auth JS SDK has a method to check browser capability, authClient.features.isDPoPSupported(). You can add this check during application bootstrapping or initialization.
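A minimal sketch of that check, assuming authClient is your configured OktaAuth instance:

import { OktaAuth } from '@okta/okta-auth-js';

declare const authClient: OktaAuth; // your configured instance, provided elsewhere

if (!authClient.features.isDPoPSupported()) {
  // Fall back to bearer tokens or block sign-in, per your app's policy.
  console.warn('This browser lacks the WebCrypto/IndexedDB support DPoP requires.');
}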

Remember, even if you aren’t using DPoP, modern browsers have more built-in security mechanisms. Stay secure, stay updated, and use safe browser practices whenever possible.

Learn more about web security, DPoP, and OAuth 2.0

In this post, you applied DPoP to a SPA and inspected DPoP in action. I hope you enjoyed it! If you want to learn more about the ways you can incorporate authentication and authorization security in your apps, you might want to check out these resources:

OAuth 2.0 and OpenID Connect overview
The Identity of OAuth Public Clients
Add Step-up Authentication Using Angular and NestJS
Configure OAuth 2.0 Demonstrating Proof-of-Possession

Remember to follow us on Twitter and subscribe to our YouTube channel for more exciting content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!

Monday, 09. September 2024

liminal (was OWI)

Link Index for Customer Identity and Access Management

The post Link Index for Customer Identity and Access Management appeared first on Liminal.co.

Finicity

FinovateFall 2024: Open banking and AI set the stage for financial innovation 

When banking, fintech and finance leaders gather in New York at one of the leading fintech conferences, FinovateFall, on September 9-11, two broad topics will dominate the agenda: how new… The post FinovateFall 2024: Open banking and AI set the stage for financial innovation  appeared first on Finicity.

When banking, fintech and finance leaders gather in New York at one of the leading fintech conferences, FinovateFall, on September 9-11, two broad topics will dominate the agenda: how new regulations and the proliferation of behavioral data are enabling the age of open banking, and how artificial intelligence (AI) and machine learning can accelerate new product development, improve the customer experience and boost profits.

Just as we expect streaming entertainment apps to offer us personalized choices, consumers and businesses today demand more digital, personalized services from their financial institutions. For decades, banks and financial institutions operated on closed ecosystems: in-person relationships were key, data was sequestered in core banking and card systems, and third-party data came from credit bureaus.  

That’s been changing recently as more businesses and consumers embrace open banking, both in response to fintech innovation and evolving data and privacy regulations. Today, application programming interfaces (APIs) enable third parties to offer services that complement bank services. In addition, new rules give consumers more control over their data and its use. These circumstances are combining to fuel a revolution in financial services.

A critical topic at FinovateFall will be how financial institutions can adapt to new Consumer Financial Protection Bureau (CFPB) rules, expected to be finalized in the coming months. The new regulations will formally establish the U.S. rules for open banking. Mastercard’s Head of Data Access and Business Development for Open Banking Ben Soccorsy will speak about how all this paves the way for a bold open banking future, discussing the opportunities posed by the new rules and how banks should address them to become a data recipient,  enhance customer experience, drive innovation and, ultimately, boost profits. 

New research emphasizes the importance of open banking  

Both businesses and consumers have welcomed open banking. According to a forthcoming global Mastercard research report set to be published in September 2024, embracing open banking will be crucial to both business-to-business partnerships and maintaining consumer relationships. Among B2B survey respondents, 92% said using AI to safeguard consumer data and streamline processes is an important consideration when selecting open banking partners. Businesses also hope that open banking can improve their profitability (69%), boost their revenue (66%) and increase productivity/efficiency (65%). 

Mastercard’s Senior Vice President for Open Banking Network Services Ryan Beaudry also speaks at FinovateFall, discussing how AI and machine learning can improve such things as account-to-account payments. That’s crucial because 80% of U.S. consumers already link their financial accounts and 66% are likely to connect their bank accounts to an app or service in the future, according to the 2024 Mastercard survey.  

The same survey also found that how financial institutions handle data and open banking is important to consumers. Indeed, many of the features that attract U.S. consumers to engage with a financial services company—efficiency, convenience, security and privacy—are driving open banking innovations.  

Asked to name the top considerations when choosing which financial institutions to do business with, more than 90% of consumers said their top four priorities were: keeping their data secure, a convenient customer experience, greater control over how their data is used, and the ability to process transactions quickly.  

Once again, FinovateFall brings together thousands of senior decision-makers from financial institutions, fintechs and the investing community. With consumers and businesses becoming more digitally savvy and hungry for new innovations in how they interact with their finances, start-ups and public companies alike will show off their latest products and innovations.  

As keynote speaker and customer experience strategist Ken Hughes said ahead of the conference, “We are in a perfect storm of change, and we need to ensure that the financial services of today are fit for the customer of tomorrow.” 

If you’re at FinovateFall yourself, make sure to meet up with our open banking experts or reach out to them directly with any questions about your open banking opportunities. You can also visit our home for everything open banking and deep dive into some of our inspirational use cases.  


The post FinovateFall 2024: Open banking and AI set the stage for financial innovation  appeared first on Finicity.


Ocean Protocol

Season 5 of the Ocean Zealy Community Campaign!

We’re happy to announce Season 5 of the Ocean Zealy Community Campaign, an initiative that has brought together our vibrant community and rewarded the most active and engaged members. 💰 Reward Pool 5,000 Ocean Tokens ($FET) that will be rewarded to the Top100 users in our leaderboard 🚀 📜Program Structure Season 5 of the Ocean Zealy Community Campaign will feature more engaging tasks

We’re happy to announce Season 5 of the Ocean Zealy Community Campaign, an initiative that has brought together our vibrant community and rewarded the most active and engaged members.

💰 Reward Pool

5,000 Ocean Tokens ($FET) that will be rewarded to the Top100 users in our leaderboard 🚀

📜Program Structure

Season 5 of the Ocean Zealy Community Campaign will feature more engaging tasks and activities, providing participants with opportunities to earn points. From onboarding tasks to Twitter engagement and content creation, there’s something for everyone to get involved in and earn points and rewards along the way.

⏰Campaign Duration: 30th of September 12:00 PM UTC

🤔How Can You Participate?

Follow this link to join and earn:

https://zealy.io/cw/onceaprotocol/questboard

Season 5 of the Ocean Zealy Community Campaign! was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ontology

Mark Cuban’s Challenge to Trump Supporters Highlights a Bigger Problem in Venture Capital…

Mark Cuban’s Challenge to Trump Supporters Highlights a Bigger Problem in Venture Capital: Transparency Mark Cuban recently put out a challenge: he wants Trump supporters to name any startups backed by the former president that don’t involve a member of his family. This seemingly simple call-out actually exposes a far deeper issue in venture capital — one that could be solved through the power of
Mark Cuban’s Challenge to Trump Supporters Highlights a Bigger Problem in Venture Capital: Transparency

Mark Cuban recently put out a challenge: he wants Trump supporters to name any startups backed by the former president that don’t involve a member of his family. This seemingly simple call-out actually exposes a far deeper issue in venture capital — one that could be solved through the power of blockchain and decentralized identities. And it’s about time someone connected the dots.

Think about it — venture capital is notoriously opaque. Most of the time, we have no idea which startups are getting funded, why certain VCs back certain founders, and what skeletons are hiding in the closets of high-profile investors. Even if someone like Trump has a rocky investment history, there’s no easy way to track it. Cuban’s challenge brings that to the forefront. If no one can name a successful Trump-backed startup, doesn’t that say something about how easily reputations in venture capital can be manipulated or shielded from scrutiny?

Now, let’s take this to the next level. What if we could bring all this on-chain? What if every venture capitalist’s track record — every investment, successful or otherwise — was tied to their decentralized identity and available for anyone to audit? Imagine a world where the power of blockchain is leveraged to not just remove middlemen, but to remove the smoke and mirrors surrounding investor reputations. Every deal, every failure, every win would be part of a permanent, transparent ledger. No more guesswork. No more empty claims. No more hiding behind family names or closed-door deals.

This concept is rooted in the heart of what Web3 promises: transparency, trust, and the ability for people to control their own data. By connecting VC histories to decentralized identities, startups would have a new tool in their arsenal — a way to verify the legitimacy and reliability of their potential investors. The days of VCs backing founders for a quick PR boost, only to ghost them when things get tough, would be over. It would empower the startup ecosystem with verifiable truth, and most importantly, accountability.

Let’s be real — venture capital needs this kind of overhaul. The recent scandals involving bad actors like Adam Neumann or the fallout from WeWork’s botched IPO are just reminders of the shady side of this industry. And don’t get me started on the “fake it till you make it” culture rampant in Silicon Valley, where founders and investors alike build smoke screens rather than sustainable businesses.

In the future, blockchain and decentralized identities could make this all a thing of the past. And Ontology is leading the charge with its Decentralized Identity technology, which has the potential to create a new level of trust in these opaque markets. By offering zero-knowledge proofs and decentralized reputation systems, Ontology allows users to maintain privacy while still proving credibility. This is the solution that venture capital — and, frankly, business at large — has been waiting for.

Mark Cuban’s call for proof of Trump-backed startups may have been a jab, but it highlights something much more important. The VC world needs more transparency. Trump’s vague business reputation is just one example of how easily information can be spun, hidden, or hyped. With decentralized identity systems and reputation on-chain, we’d never have to ask these questions again. We’d know, without a doubt, who’s actually worth their salt.

As we continue to develop Web3 technologies, let’s push for a world where investor reputations and venture capital histories are public, verifiable, and untouchable by spin. It’s time for the truth to come on-chain.

Interested in learning more about decentralized identities and how they can revolutionize transparency in venture capital? Explore Ontology’s decentralized identity solutions and see how we’re building the future of trust.

Mark Cuban’s Challenge to Trump Supporters Highlights a Bigger Problem in Venture Capital… was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


KuppingerCole

Nov 07, 2024: Overcoming the Challenges of MFA and a Passwordless Future

Securing user identities has become a crucial focus for organizations of all sizes. The evolution from traditional passwords to Multi-Factor Authentication (MFA) and eventually to passwordless solutions introduces various challenges, such as technical obstacles, changing threat landscapes, and resource limitations.
Securing user identities has become a crucial focus for organizations of all sizes. The evolution from traditional passwords to Multi-Factor Authentication (MFA) and eventually to passwordless solutions introduces various challenges, such as technical obstacles, changing threat landscapes, and resource limitations.

Oct 09, 2024: Adopting Passwordless Authentication

As businesses shift to more flexible work models, traditional password systems pose security risks and inefficiencies. The session will provide insights from recent KuppingerCole research, offering a comprehensive view of the evolving enterprise security landscape.
As businesses shift to more flexible work models, traditional password systems pose security risks and inefficiencies. The session will provide insights from recent KuppingerCole research, offering a comprehensive view of the evolving enterprise security landscape.

Ocean Protocol

Formula 1 Racing Challenge: 2024 Strategy Analysis

F1 :: 2024 Strategy Analysis Poster ‘The Formula 1 Racing Challenge’ challenges participants to analyze race strategies during the 2024 season. They will work with lap-by-lap data to assess how pit stop timing, tire selection, and stint management influence race performance. By conducting exploratory data analysis (EDA), they will identify relationships between these variables and generat
F1 :: 2024 Strategy Analysis Poster

‘The Formula 1 Racing Challenge’ challenges participants to analyze race strategies during the 2024 season. They will work with lap-by-lap data to assess how pit stop timing, tire selection, and stint management influence race performance. By conducting exploratory data analysis (EDA), they will identify relationships between these variables and generate insights on how strategy impacts race outcomes.

Participants will apply time series analysis, regression modeling, and multivariate techniques to track tire performance, analyze pit stop patterns, and model the effects of stint management on race pace. These methods will help them quantify how strategies evolve throughout a race and produce actionable insights for future Formula 1 strategies.

Objectives

Participants will explore the relationships between tire performance, pit stop frequency, and race outcomes. They will focus on how the number and timing of pit stops affect final race positions, using statistical methods like correlation analysis and regression to validate these relationships.

Participants will also analyze how different tire compounds influence lap times, calculate average lap times for each stint, and use time series analysis to track tire degradation. They will model how tire wear impacts lap times throughout the race, examining stint lengths and the performance of Soft, Medium, and Hard tire compounds.
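To give a flavor of the kind of analysis involved, here is a small sketch (not official starter code; the field names are assumptions about the dataset) that estimates a degradation rate as the least-squares slope of lap time against lap number within a stint:

// Illustrative sketch; dataset field names are assumptions.
interface StintLap { lapInStint: number; lapTimeSeconds: number; }

// Least-squares slope of lap time vs. lap-in-stint: seconds lost per lap.
function degradationRate(laps: StintLap[]): number {
  const n = laps.length;
  const meanX = laps.reduce((s, l) => s + l.lapInStint, 0) / n;
  const meanY = laps.reduce((s, l) => s + l.lapTimeSeconds, 0) / n;
  const cov  = laps.reduce((s, l) => s + (l.lapInStint - meanX) * (l.lapTimeSeconds - meanY), 0);
  const varX = laps.reduce((s, l) => s + (l.lapInStint - meanX) ** 2, 0);
  return cov / varX; // e.g. 0.08 => the tire loses ~0.08 s of pace per lap
}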

Data

The dataset includes detailed lap-by-lap data for the 2024 Formula 1 season, capturing key variables such as lap times, tire compounds, pit stop timings, stint lengths, and race positions. Participants will analyze this data to explore how different factors influence race outcomes. They will assess tire performance by tracking how lap times change throughout stints, comparing the performance of Soft, Medium, and Hard compounds under varying race conditions. This analysis will allow participants to quantify how long each tire type can maintain optimal performance and how pit stop decisions align with tire wear.

The pit stop data provides precise timings, allowing participants to study the relationship between pit stop frequency, duration, and race position. By applying multivariate analysis, they will model how pit stops, tire degradation, and stint lengths affect race results. Regression models will help participants predict race outcomes based on the strategic choices made by teams, such as pit stop timing and the number of stints per tire compound.

Mission

The mission of this challenge is to develop a data-driven framework for analyzing race strategies in Formula 1. Participants will use EDA and statistical analysis to understand how tire management and pit stop decisions impact race outcomes. They will quantify these impacts by calculating lap times, identifying strategic patterns, and validating their findings with hypothesis testing.

Participants will also analyze how race length affects strategy. They will investigate whether longer races lead to more pit stops or different tire choices. Time series analysis will help them track strategy shifts during longer races and compare them across teams and drivers.

Rewards

The $10,000 prize pool will be distributed among the top 10 performers:

Prize pool rewards and point distribution

Participants will also earn points toward the 2024 championship. Accumulating points correlates with increased rewards, as seen in the 2023 Championship, where top performers received an additional $10 for each point earned throughout the year.

Opportunities

This challenge is not just about winning rewards; it’s about enhancing your skills in advanced data science techniques such as regression analysis, time series modeling, and clustering algorithms. By applying these techniques to real-world racing data, you’ll learn how to analyze complex datasets, identify patterns in race strategies, and derive actionable insights that inform competitive decision-making. This experience will prepare you for roles in sports analytics and other data-driven industries, equipping you with practical expertise in strategy analysis.

How to Participate

Are you ready to join us on this quest? Whether you’re a seasoned data pro or just starting, there’s a place for you in our vibrant community of data scientists. Let’s explore and discover together on Desights, our dedicated data challenge platform. The challenge runs from September 5 until September 24, 2024, 13:00 UTC. Click here to access the challenge and become part of our data science community.

Community and Support

To engage in discussions, ask questions, or join the community conversation, connect with us on Ocean’s Discord channel #data-science-hub or the Desights support channel #data-challenge-support.

About Ocean Protocol

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Follow Ocean on Twitter or Telegram to keep up to date. Chat directly with the Ocean community on Discord — or track Ocean’s progress on GitHub.

Formula 1 Racing Challenge: 2024 Strategy Analysis was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 08. September 2024

KuppingerCole

Now or Never: Successful Transition From SAP Identity Management

SAP has announced the end of life for its identity management (IDM) system, which is a key component in many traditional SAP environments. This poses a challenge for organizations running on-premises SAP systems. To plan for a smooth transition, organizations should consider key strategies such as taking the time for thorough planning, thinking about the future of their IAM, and analyzing requirem

SAP has announced the end of life for its identity management (IDM) system, which is a key component in many traditional SAP environments. This poses a challenge for organizations running on-premises SAP systems. To plan for a smooth transition, organizations should consider key strategies such as taking the time for thorough planning, thinking about the future of their IAM, and analyzing requirements before choosing a new solution.

The cost of implementation projects can be significant, but investing in proper preparation and tools upfront can save time and money in the long run. It is important to take a holistic view and consider the broader picture, including GRC and access governance solutions. Finding the right solution requires support from experts who understand the market and the organization's specific requirements.



Friday, 06. September 2024

Extrimian

DIDcon: Advances in Self-Sovereign Identity in Latin America

Introduction: DIDcon Identity Day The first edition of DIDcon gathered experts from various fields in Buenos Aires, Argentina, to explore how decentralized identity technology enhances security, privacy, and data interoperability in an increasingly digitalized world. Table of Contents What is Decentralized Identity? Self-Sovereign Identity (SSI) redefines the concept of digital identity by managin
Introduction: DIDcon Identity Day

The first edition of DIDcon gathered experts from various fields in Buenos Aires, Argentina, to explore how decentralized identity technology enhances security, privacy, and data interoperability in an increasingly digitalized world.

Table of Contents

What is Decentralized Identity?
Summary of Talks at DIDcon
Welcome and Introduction
Security and Decentralization
The Future of Identity
Trust Ecosystems: Use Cases
Conclusion

What is Decentralized Identity?

Self-Sovereign Identity (SSI) redefines the concept of digital identity by managing and storing information in a decentralized manner, using technologies like blockchain. This model allows individuals to control their personal information without relying on centralized intermediaries, significantly improving data security and privacy.

Summary of Talks at DIDcon

Welcome and Introduction

Jesús Cepeda, CEO and co-founder of OS City (https://www.os.city/), and Diego Fernández, Secretary of Innovation and Digital Transformation of GCBA, opened the event by emphasizing decentralized identity as an essential tool that returns control of information to users. They highlighted how this technology unlocks global economic potential and combats cybercrime and the frictions of less intuitive solutions. They also pointed to QuarkID as an innovative example of how Latin America is implementing decentralized identity to enhance citizen security and privacy.

Security and Decentralization

In this talk moderated by Alfonso Campenni, Pablo Sabbatella, security researcher at SEAL and founder of Defy Education, emphasized how scams and cybercrimes have become more sophisticated. To combat this, he discussed how decentralization is an interesting path that strengthens the protection and security of information.

During the recent digital security panel, Pablo Sabbatella, an expert in the field, shared valuable recommendations for protecting our identities and data online. He stressed the importance of adopting safe practices in the digital age, especially in the context of increasing cyberattacks and vulnerabilities in the applications we use daily.

Main Security Recommendations by Pablo Sabbatella:

- Avoid Repeating Passwords: It’s crucial to have unique passwords for each service to prevent cross-access in case of data breaches.
- Use Two-Factor Authentication (2FA): Adding a second level of security is crucial. It is recommended to use code-generating apps instead of SMS or emails, which are less secure.
- Be Cautious with Personal Data: It is vital to limit the personal information shared online and in applications, especially the phone number, which is a sensitive piece of data.
- Avoid Downloading Pirated Software: Unofficial programs and applications can contain malware and seriously compromise personal and financial security.

These guidelines not only increase individual security but also foster a culture of awareness about online safety, which is essential for navigating safely in today’s digital world.

He also mentioned new standards being built for the implementation of Account Abstraction through smart contracts, which enhance key management and user experience.

The Future of Identity

In a panel moderated by Pablo Mosquella of Extrimian, experts such as Guillermo Villanueva, CEO and co-founder of Extrimian, Matthias Broner, Head of Growth LATAM at ZKsync, Mateo Sauton from Worldcoin, and Pedro Alessandri, Undersecretary of Smart City, debated how decentralized identity is transforming the digital landscape, creating a safer, more private, scalable, and interoperable environment. They also discussed the positive impact of QuarkID and its rapid expansion across Latin America, underscoring its potential to strengthen digital trust in the region.

Trust Ecosystems: Use Cases

In this session moderated by Lucas Jolías from OS City and Fabio Budris, Advisor to the Secretary of Innovation of the City of Buenos Aires, concrete use cases of decentralized identity were presented in managing procedures in Salta, at the National Technological University (UTN), and in pilot tests for organ transplant management at INCUCAI. These examples clearly illustrated the tangible impact of these technologies in key sectors such as government, education, and health.

Conclusion

DIDcon – Identity Day underscored the transformative power of Decentralized Identity to revolutionize society and maximize value in the physical, digital, and hybrid worlds. Initiatives like QuarkID are driving Latin America toward a more secure and reliable digital future, overcoming barriers that have historically limited its technological potential.

The adoption of these technologies not only promises to improve security and privacy but is also building a solid digital trust ecosystem that will bring significant benefits to all the involved countries.

Keywords: decentralization, SSI, DID, VC, QuarkID, Extrimian, blockchain, trust, security, privacy, interoperability, technology, digital identity.

The post DIDcon: Advances in Self-Sovereign Identity in Latin America first appeared on Extrimian.


Tokeny Solutions

Amsterdam Teambuilding Fuels Our Mission for Open Finance


Greetings from Amsterdam! We hope you had a wonderful summer holiday.

Recently, our global team gathered in this dynamic city, not just to build a stronger bond, but to align our vision and drive our mission forward. As we explored the charming streets and iconic canals, we strengthened our commitment to transforming finance.

Our Vision: We see a future where finance is modern, efficient, and accessible—where assets move as quickly as texts, transfers are instant, and cross-platform interactions are seamless. This is the promise of open finance, built on DLT infrastructures.

Our Mission: Our mission is to empower institutions with our no-code solution, the T-REX Platform, and our proven API solution, the T-REX Engine, along with our expertise and ecosystem, to upgrade to open finance seamlessly.

The Market’s Moment: We are at a crucial point in the journey of tokenization. According to Gartner’s latest report, tokenization is currently in the ‘trough of disillusionment’ stage, with mainstream adoption expected in the next 2-5 years. For institutions, this means there is a narrow window of opportunity. To be fully prepared for the market shift, when more assets will be tokenized than remain paper-based, in as little as two years, institutions must begin building their tokenization capabilities now.

This requires a proactive transformation of operational models, including the integration of a robust onchain operating system. Solutions like those offered by Tokeny can play a critical role in facilitating this transformation.

The risk of inaction is significant. Institutions that delay will struggle to keep up, risking their market position and potentially losing clients to more forward-thinking competitors. The time to act is now, or risk being left behind as the market rapidly evolves.

Our Growth: Our team is expanding fast, with talented new members from Luxembourg, Madrid, Bangkok, Paris, Sarajevo, Zaragoza, and Barcelona. Each of them brings fresh perspectives and skills, helping us deliver products that address our partners’ needs and maintain their competitive advantage.

The Road Ahead: With over 120 successful use cases globally, 42 talented builders, and more than 3 billion blockchain events recorded on our platform, we are ready to make history together with you. Our time in Amsterdam has made us stronger, more aligned, and more motivated than ever to make open finance a reality. The future is limitless.

We’ve also updated our vision and mission on our landing page to reflect our journey over the past 7 years. You can check it out here.

Tokeny Spotlight

PARTNERSHIP

ShipFinex and Tokeny revolutionize maritime asset tokenization.

Read More

CONTRIBUTION

CCO, Daniel Coheur, contributed to Zodia Custody’s report.

Read More

TEAM DAY

Our global team came together in the “gezellige” city of Amsterdam.

Read More

MILESTONE

We celebrate our LinkedIn page has reached 10,000 followers!

Read More

PRODUCT NEWSLETTER

We dive into demand for onchain services and why API’s are the key to lead.

Read More

INATBA

Discover our contribution to the tokenization section of the recent INATBA report.

Read More

Tokeny Events

Token2049
September 18th-19th, 2024 | 🇸🇬 Singapore

Register Now

Mainnet
September 30th-October 2nd, 2024 | 🇺🇸 USA

Register Now

European Blockchain Convention
September 25th-26th, 2024 | 🇪🇸 Spain

Register Now

DAW London
October 2nd-3rd, 2024 | 🇬🇧 United Kingdom

Register Now

ERC3643 Association Recap

 Bounty Challenge

Zama is organizing another bounty challenge: create a unique and confidential variant of the ERC-3643 security token standard using Zama’s fhEVM.

Read more

Subscribe Newsletter

A monthly newsletter designed to give you an overview of the key developments across the asset tokenization industry.

Previous newsletters:

Sep 6 – Amsterdam Teambuilding Fuels Our Mission for Open Finance
Aug 1 – Transaction Privacy: The Last Blocker for Massive Open Finance Adoption
Jun 28 – Tokenized Securities Unaffected by MiCA, Utility Tokens and Stablecoins Face Stricter Rules
May 22 – Institutional RWA Tokenization Needs Permissioned Cash Coins

The post Amsterdam Teambuilding Fuels Our Mission for Open Finance appeared first on Tokeny.


PingTalk

Policy Based Access Control (PBAC) Explained

Discover how Policy Based Access Control (PBAC) works, its benefits, and implementation steps tailored for financial services.

Traditional access control methods, such as role-based access control (RBAC) and attribute-based access control (ABAC), have built the foundation for securing systems and managing user access.

However, they fail to provide the flexibility and enhanced security needed in today's dynamic environment, especially for the financial services industry. As organizations navigate stringent compliance requirements and evolving security threats, they need a better alternative for making dynamic, context-aware access decisions: policy-based access control (PBAC).

Below, we’ll explore PBAC in further detail, how it compares to other models, and how it benefits the financial services industry.
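To make the idea concrete, here is a minimal sketch of what a policy-driven decision might look like in TypeScript. The policy shape, attribute names, and rules are all hypothetical illustrations, not from the original post; real PBAC products express policies in their own policy languages and evaluate far richer context.

// Hypothetical policy and context shapes, for illustration only.
type Context = { role: string; department: string; network: string; hour: number };
type Decision = "Permit" | "Deny";

interface Policy {
  resource: string;
  condition: (ctx: Context) => boolean;
}

// Example: wire transfers are permitted only for treasury officers,
// from the corporate network, during business hours.
const wireTransferPolicy: Policy = {
  resource: "payments:wire-transfer",
  condition: (ctx) =>
    ctx.role === "treasury-officer" && // who the user is (attribute)
    ctx.department === "payments" &&   // organizational context
    ctx.network === "corporate" &&     // where the request comes from (environment)
    ctx.hour >= 8 && ctx.hour < 18,    // when the request happens (time)
};

function authorize(policy: Policy, ctx: Context): Decision {
  return policy.condition(ctx) ? "Permit" : "Deny";
}

// authorize(wireTransferPolicy, { role: "teller", department: "branch", network: "corporate", hour: 10 })
// => "Deny"

Unlike a static role check, the decision changes as the context changes, which is the dynamic, context-aware property described above.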

Thursday, 05. September 2024

IdRamp

Account Takeover Attack (ATO) Defense: A Guide to Protecting Your Company


Account takeover (ATO) attacks have become a sophisticated and pervasive threat, with criminal organizations targeting businesses of all sizes and types. By gaining unauthorized access to company accounts, attackers can disrupt operations, steal sensitive data, and damage a company’s reputation.

The post Account Takeover Attack (ATO) Defense: A Guide to Protecting Your Company first appeared on Identity Verification Orchestration.

KuppingerCole

Authenticating Identities in the Age of AI: Strategies for Trustworthy Verification


In today's digital world, identity authenticity faces constant scrutiny, especially with the emergence of generative AI. However, modern tech provides innovative solutions. Chipped identity documents offer a trusted verification basis, embedding secure chips with verified data. Advancements like biometric authentication and blockchain-based verification ensure enhanced security and integrity. With these innovations, organizations can navigate identity verification confidently.

Join identity experts from KuppingerCole Analysts and InverID as they explore the pivotal role of chipped identity documents in reliable verification and their integration into eIDAS 2.0-compliant identity wallets. Discover strategies for establishing trust amidst faux realities, ensuring the integrity of digital identities.

Annie Bailey, Research Strategy Director at KuppingerCole Analysts, will discuss the implications of eIDAS 2.0 legislation and its impact on identity management. She will explain the concept of reusable verified identities and their significance in a multi-wallet ecosystem, as well as offer insights into preparing for a future with diverse credentials and the challenges it presents.

Wil Janssen, Co-founder and CRO of InverID, will explain the critical need for remote identity verification in today's digital landscape. He will illustrate how to leverage government-issued identity documents for secure verification, as well as highlight the importance of identity verification services in EU Wallets and beyond.




auth0

External User Verification with Forms

Learn how to leverage Auth0 Forms to implement an invitation code workflow and improve the onboarding of your SaaS users.

Evernym

Ensuring Compliance with Regulatory Requirements in Digital Security


Ensuring Compliance with Regulatory Requirements in Digital Security In an increasingly regulated world, ensuring compliance with digital security requirements is crucial for organizations of all sizes. Regulations and standards are designed to protect sensitive data, ensure privacy, and enhance the overall security of digital systems. However, navigating these requirements can be ...

The post Ensuring Compliance with Regulatory Requirements in Digital Security appeared first on Evernym.


Elliptic

Crypto regulatory affairs: Hong Kong kicks off tokenization sandbox with major institutional players

Hong Kong has taken yet another important step to bolster its position as a leader in the Asia-Pacific region for well-regulated cryptoasset and blockchain innovation.


Ocean Protocol

DF105 Completes and DF106 Launches

Predictoor DF105 rewards available. DF106 runs Sept 5 to Sept 12, 2024.

1. Overview

Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by making predictions via Ocean Predictoor.

Data Farming Round 105 (DF105) has completed.

DF106 is live today, Sept 5. It concludes on September 12. For this DF round, Predictoor DF has 37,500 OCEAN rewards and 20,000 ROSE rewards.

2. DF structure

The reward structure for DF106 is comprised solely of Predictoor DF rewards.

Predictoor DF: Actively predict crypto prices by submitting a price prediction and staking OCEAN to slash competitors and earn.

3. How to Earn Rewards, and Claim Them

Predictoor DF:

- To earn: submit accurate predictions via Predictoor Bots and stake OCEAN to slash incorrect Predictoors.
- To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from Predictoor DF user guide in Ocean docs.
- To claim ROSE rewards: see instructions in Predictoor DF user guide in Ocean docs.

4. Specific Parameters for DF106

Budget. Predictoor DF: 37.5K OCEAN + 20K ROSE

Networks. Predictoor DF applies to activity on Oasis Sapphire. Here is more information about Ocean deployments to networks.

Predictoor DF rewards are calculated as follows:

First, the DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards. Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.

Expect further evolution in DF: adding new streams and budget adjustments among streams.

Updates are always announced at the beginning of a round, if not sooner.

About Ocean, DF and Predictoor

Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data. Follow Ocean on Twitter or TG, and chat in Discord. Ocean is part of the Artificial Superintelligence Alliance.

In Predictoor, people run AI-powered prediction bots or trading bots on crypto price feeds to earn $. Follow Predictoor on Twitter.

DF105 Completes and DF106 Launches was originally published in Ocean Protocol on Medium, where people are continuing the conversation by highlighting and responding to this story.


Okta

Elevate Access Token Security by Demonstrating Proof-of-Possession


We use access tokens to request data and perform actions within our software systems. The client application sends a bearer token to the resource server. The resource server checks the validity of the access token before acting upon the HTTP request. What happens if the requesting party is malicious, steals your token, and makes a fraudulent API call? Would the resource server honor the HTTP request? If you use a bearer token, the answer is “yes.”

My teammate wrote that an access token is like a hotel room keycard: anyone holding a valid keycard can use it to access the room, and anyone holding a valid access token can use it to access a resource server.

See “7 Ways an OAuth Access Token is like a Hotel Key Card” by Aaron Parecki, which covers seven things OAuth 2.0 access tokens have in common with a hotel key card.

Bearer tokens (and static API keys) mean whoever presents the valid token to the resource server has access, which makes the token powerful and vulnerable. We can look at high-profile token thefts to see how prevalent and disastrous token theft is, so we want to ensure our applications aren’t vulnerable to similar attacks.

To protect tokens, we incorporate secure coding techniques into our apps, configure a quick expiration time on the token, and ensure only requests sent to allowed origins include the access token. Still, token attacks pose a risk to highly sensitive resources. What more can we do to secure requests?

This post describes a new OAuth 2.0 spec supported by Okta that makes access tokens less prone to misuse and helps mitigate security risks. If you want to refresh your OAuth knowledge, check out What the heck is OAuth.

Table of Contents

Bind OAuth 2.0 access tokens to client applications
Demonstrate proof of possession (DPoP) using JWTs
Incorporating DPoP into OAuth 2.0 token requests
Use DPoP-bound access tokens in HTTP requests
Extend the DPoP flow with an enhanced security handshake
Validate DPoP requests in the resource server
Learn more about OAuth 2.0, Demonstrating Proof-of-Possession, and secure token practices

Bind OAuth 2.0 access tokens to client applications

If we go back to the hotel keycard analogy, we want a hotel keycard that only you can use and that links you as the rightful user of the hotel keycard.

In the OAuth world, ideally, we want to link the authorization server, the client, and the access token and limit token use to the client. In OAuth terminology, the sender and client application are the same entity. By linking these entities, external parties can’t misuse the access token.

OAuth 2.0 defines a few methods to bind access tokens.

🤐 Client secret
Confidential clients are applications running in a protected environment where user authentication and token storage occur within backend servers, such as traditional server-rendered web applications. Confidential clients can use a secret value known to the requestor (the client application requesting the tokens) and the authorization server as part of HTTP requests. The client secret is a long-lived value generated by the authorization server. However, malicious parties who steal the secret can use it.

🌐 Mutual TLS Client Authentication and Certificate-Bound Access Tokens (mTLS)
Mutual authentication means parties at the ends of the network connection identify themselves using a combination of asymmetric encryption and TLS certificates as part of the HTTP request. mTLS is a highly secure method for confidential clients but can be complex to implement and maintain.

🔒 Private key JSON Web Token (JWT)
Machine-to-machine HTTP requests don’t have user context. The requesting service often uses a combination of an ID and secret using the Basic authorization scheme when making HTTP calls, but doing so isn’t secure. Private key JWTs offer a more secure approach. The requesting service uses asymmetric encryption to sign any JWTs it creates.
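As a rough illustration of the private key JWT method, here is a sketch using the open-source jose JavaScript library. The clientId, tokenUrl, and PEM-encoded key are placeholders, and this is a sketch of the RFC 7523 pattern rather than any one vendor’s implementation; production code should follow your authorization server’s documentation.

import { SignJWT, importPKCS8 } from "jose";

// Sketch of a private key JWT client assertion (RFC 7523); all values are placeholders.
async function buildClientAssertion(clientId: string, tokenUrl: string, pem: string) {
  const privateKey = await importPKCS8(pem, "RS256");
  return new SignJWT({})
    .setProtectedHeader({ alg: "RS256" })
    .setIssuer(clientId)           // iss: the client application itself
    .setSubject(clientId)          // sub: the client application itself
    .setAudience(tokenUrl)         // aud: the authorization server's token endpoint
    .setIssuedAt()
    .setExpirationTime("5m")       // short-lived by design
    .setJti(crypto.randomUUID())   // unique ID so the assertion cannot be replayed
    .sign(privateKey);
}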

These methods apply only to confidential clients that can maintain secrets, not to public clients.

Public clients are apps that run authentication code within the user’s hardware, such as Single-Page Applications (SPAs) and mobile clients. Applications built on a public client architecture contain avenues for token security exploits unless carefully protected. Is there an alternative that works for confidential and public clients without incurring costly implementation and maintenance?

Demonstrate proof of possession (DPoP) using JWTs

There’s now a solution for all client types calling sensitive resources! The IETF published a new extension to OAuth 2.0: Demonstrating Proof of Possession (DPoP), targeted primarily for public client use. You may have heard of this idea before, as the concept has been around for a while. With a published spec, it’s now official, standardized, and supported!

The client and authorization server work together to generate tokens with proof of possession.

The client creates non-repudiable proof of ownership using asymmetric encryption.
The authorization server uses this proof when generating the token.

How is this different from earlier methods that bind the caller to the access token? The big difference is this method happens at runtime across any client type. Confidential clients have cryptographic libraries supporting public/private key encryption, but a gap exists for public clients. Thanks to enhanced browser API capabilities such as the Web Crypto API and SubtleCrypto, modern browser-based JavaScript apps can also use DPoP.

🚨 You must protect the client from Cross-Site Scripting (XSS) and Remote File Inclusion (RFI) attacks to prevent exfiltration or unauthorized use of the keyset. 🚨

Store the keys in a storage format that someone can’t export and guard the app against attacks where an attacker’s code can run in the user’s context. Use up-to-date secure SPA frameworks, employ defensive coding practices, and add appropriate Content Security Policies (CSP) to protect the client. Apply secure header best practices and consider using the Trusted Types API if you can limit end-user browser usage to browsers that support it.
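For example, the Web Crypto API can generate a keypair whose private key is never exportable, which is one way to satisfy the guidance above. This is a minimal sketch, assuming a browser context; persisting the CryptoKey in IndexedDB is a common pattern but still relies on the rest of your XSS defenses holding.

// Generate a signing keypair whose private key can never leave the browser.
const keyPair = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-256" },
  false,              // extractable: false, so even your own code cannot export the private key
  ["sign", "verify"]
);
// CryptoKey objects are structured-cloneable, so the pair can be stored in IndexedDB
// and reused across page loads without the private key material ever being exposed.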

⚠️ Note

We will investigate DPoP proofs and inspect how the client constructs them. However, despite this knowledge, you should always use Okta SDKs or a vetted, well-maintained library with built-in DPoP support when making requests using DPoP.

Incorporating DPoP into OAuth 2.0 token requests

When using DPoP, the client creates a “proof” using asymmetric encryption. The proof is a JWT, which includes the URI, the HTTP method of the request, and the public key. The client application requests tokens from the authorization server and includes the proof as part of the request. The authorization server binds a public key hash and the HTTP request information from the proof within the access token it returns to the client. This means the access token is only valid for the specific HTTP request.

In the OAuth 2.0 Authorization Code flow with DPoP, the client attaches a proof to the token request and the authorization server validates it before issuing the bound token.

The proof contains metadata proving the sender and ways to limit unauthorized use by limiting the HTTP request, the validity window, and reuse. If you inspect a decoded DPoP proof JWT, you’ll see the header contains information proving the sender:

The typ claim set to dpop+jwt
The public/private key encryption algorithm
The public key in JSON Web Key (JWK) format

Inspecting the decoded proof’s payload shows claims that limit unauthorized use, such as:

HTTP request info including the URI and HTTP method (such as /oauth2/v1/token and POST)
Issue time to limit the validity window for the proof
An identifier that’s unique within the validity window to mitigate replay attacks
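Putting those claims together, here is a sketch of what constructing such a proof could look like with the open-source jose library. The domain is a placeholder and the library choice is an assumption; as noted above, in practice you should let an SDK with built-in DPoP support do this for you.

import { generateKeyPair, exportJWK, SignJWT } from "jose";

// Sketch of DPoP proof construction; {yourOktaDomain} is a placeholder.
const { publicKey, privateKey } = await generateKeyPair("ES256");
const jwk = await exportJWK(publicKey); // the public key travels in the proof header

const proof = await new SignJWT({
  htm: "POST",                                     // HTTP method of the request
  htu: "https://{yourOktaDomain}/oauth2/v1/token", // URI of the request
})
  .setProtectedHeader({ typ: "dpop+jwt", alg: "ES256", jwk })
  .setIssuedAt()                // iat: bounds the proof's validity window
  .setJti(crypto.randomUUID())  // unique identifier to mitigate replay
  .sign(privateKey);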

Let’s inspect the /token request a little further. When making the request, the client adds the proof in the header. The rest of the request, including the grant type and the code itself, remains the same for the Authorization Code flow.

POST /oauth2/v1/token HTTP/1.1
DPoP: eyJ0eXAiOiJkcG9w.....H8-u9gaK2-oIj8ipg
Accept: application/json
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code
&code=XGa_U6toXP0Rvc.....SnHO6bxX0ikK1ss-nA

The authorization server decodes the proof and incorporates properties from the JWT into the access token. The authorization server responds to the /token request with the token and explicitly sets the response header to state the token type as DPoP.

HTTP/1.1 200 OK
Content-Type: application/json

{
  "access_token": "eyJhbG1NiIsPOk.....6yJV_adQssw5c",
  "token_type": "DPoP",
  "expires_in": 3600,
  "refresh_token": "5PybPBQRBKy2cwbPtko0aqiX"
}

You now have a DPoP type access token with a possession proof. What changes when requesting resources?

Use DPoP-bound access tokens in HTTP requests

DPoP tokens are no longer bearer tokens; the token is now “sender-constrained.” The sender, the client application calling the resource server, must have both the access token and a valid proof, which requires the private key held by the client. This means malicious sorts need both pieces of information to impersonate calls into the server. The spec builds in constraints even if a malicious sort steals the token and the proof. The proof limits the call to a unique request for the URI and method within a validity window. Plus, your application system still has the defensive web security measures applicable to all web apps, preventing the leaking of sensitive data such as tokens and keysets.

The client generates a new proof for each HTTP request and adds a new property, a hash of the access token. The hash further binds the proof to the access token itself, adding another layer of sender constraint. The proof’s payload now includes:

HTTP request info including the URI and HTTP method (such as https://{yourResourceServer}/resource and GET)
Issue time to limit the validity window for the proof
An identifier that’s unique within the validity window to mitigate replay attacks
Hash of the access token
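The access token hash (the ath claim) is the base64url-encoded SHA-256 digest of the token’s ASCII value. A small sketch of computing it in the browser, assuming the Web Crypto API is available:

// Sketch: compute the ath claim, base64url(SHA-256(access token)).
async function accessTokenHash(accessToken: string): Promise<string> {
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(accessToken)
  );
  // Convert the 32-byte digest to base64url.
  return btoa(String.fromCharCode(...new Uint8Array(digest)))
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}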

Clients request resources by sending the access token in the Authorization header, along with proof demonstrating they’re the legitimate holders of the access token to resource servers using a new scheme, DPoP. HTTP requests to the resource server change to

GET https://{yourResourceServer}/resource HTTP/1.1
Accept: application/json
Authorization: DPoP eyJhbG1NiIsPOk.....6yJV_adQssw5c
DPoP: eyJhbGciOiJIUzI1.....-DZQ1NI8V-OG4g

The resource server verifies the validity of the access token and the proof before responding with the requested resource.

Extend the DPoP flow with an enhanced security handshake

DPoP optionally defines an enhanced handshake mechanism for calls requiring extra security measures. The client could sneakily create proofs for future use by setting the issued time in advance, but the authorization and resource servers can wield their weapon, the nonce. The nonce is an opaque value the server creates to limit the request’s lifetime. If the client makes a high-security request, the authorization or resource server may issue a nonce that the client incorporates within the proof. Doing so binds the specific request and time of the request to the server.

An example of a highly secure request is when making the initial token request. Okta follows this pattern. Different industries may apply guidance and rules for the types of resource server requests requiring a nonce. Since the enhancement requires an extra HTTP request, use it minimally.

When the authorization server’s /token request requires a nonce, the server rejects the request and returns an error. The response includes a new header type, DPoP-Nonce, with the nonce value, and a new standard error message, use_dpop_nonce. The client then repeats the token request with a fresh proof that embeds the nonce.

Let’s look at the HTTP response from the authorization and resource servers requiring a nonce. The authorization server responds to the initial token request with a 400 Bad Request and the needed nonce and error information.

HTTP/1.1 400 Bad Request
DPoP-Nonce: server-generated-nonce-value

{
  "error": "use_dpop_nonce",
  "error_description": "Authorization server requires nonce in DPoP proof"
}

When the resource server requires a nonce, the response changes. The resource server returns a 401 Unauthorized with the DPoP-Nonce header and a WWW-Authenticate header containing the use_dpop_nonce error message.

HTTP/1.1 401 Unauthorized
DPoP-Nonce: server-generated-nonce-value
WWW-Authenticate: error="use_dpop_nonce", error_description="Resource server requires nonce in DPoP proof"

We want that resource, so it’s time for a new proof! The client reacts to the error and generates a new proof with the following info in the payload:

HTTP request info including the URI and HTTP method (such as https://{yourResourceServer}/resource and GET)
Issue time to limit the validity window for the proof
An identifier that’s unique within the validity window to mitigate replay attacks
The server-provided nonce value
Hash of the access token

With this new proof, the client can remake the request.
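In code, the retry fits naturally into a small fetch wrapper. This sketch assumes a hypothetical makeProof helper that builds a proof like the one sketched earlier, extended to accept an optional nonce claim:

// Sketch: retry once with a fresh proof when the server demands a nonce.
async function fetchWithDpop(
  url: string,
  method: string,
  headers: Record<string, string>,
  makeProof: (nonce?: string) => Promise<string>
): Promise<Response> {
  let response = await fetch(url, { method, headers: { ...headers, DPoP: await makeProof() } });
  // A 400 (authorization server) or 401 (resource server) carrying a DPoP-Nonce header
  // signals that a new proof embedding the nonce is required.
  const nonce = response.headers.get("DPoP-Nonce");
  if (nonce && (response.status === 400 || response.status === 401)) {
    response = await fetch(url, { method, headers: { ...headers, DPoP: await makeProof(nonce) } });
  }
  return response;
}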

Validate DPoP requests in the resource server

Okta’s API resources support DPoP-enabled requests. If you want to add DPoP support to your own resource server, you must validate the request. You’ll decode the proof to verify the properties in the header and payload sections of the JWT. You’ll also need to verify properties within the access token. OAuth 2.0 access tokens can be opaque, so use your authorization server’s /introspect endpoint to get token properties. Okta’s API security guide, Configure OAuth 2.0 Demonstrating Proof-of-Possession, has a step-by-step guide on validating DPoP tokens, but you should use a well-maintained and vetted OAuth 2.0 library to do this for you instead. Finally, enforce any application-defined access control measures before returning a response.
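For a sense of what those checks involve, here is a sketch of the proof-side validation using the jose library. The library choice and the jkt parameter name are assumptions for illustration; jkt here stands for the key thumbprint the access token was bound to (its cnf.jkt claim, obtained via /introspect for opaque tokens). A real implementation would also enforce an iat freshness window, track jti values against replay, and verify the ath claim.

import { jwtVerify, EmbeddedJWK, calculateJwkThumbprint, type JWK } from "jose";

// Sketch of resource-server-side DPoP proof validation; prefer a vetted library in production.
async function verifyDpopProof(proof: string, method: string, uri: string, jkt: string) {
  // 1. Verify the proof's signature using the public key embedded in its own header.
  const { payload, protectedHeader } = await jwtVerify(proof, EmbeddedJWK, { typ: "dpop+jwt" });
  // 2. The proof must target exactly this request.
  if (payload.htm !== method || payload.htu !== uri) {
    throw new Error("DPoP proof does not match this request");
  }
  // 3. The embedded key must match the key the access token was bound to.
  const thumbprint = await calculateJwkThumbprint(protectedHeader.jwk as JWK);
  if (thumbprint !== jkt) {
    throw new Error("DPoP proof key does not match the token binding");
  }
}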

Learn more about OAuth 2.0, Demonstrating Proof-of-Possession, and secure token practices

I hope this intro to sender-constrained tokens is helpful and inspires you to use DPoP to elevate token security! Watch for more content about DPoP, including hands-on experimentation and code projects. If you found this post interesting, you may also like these resources:

Secure OAuth 2.0 Access Tokens with Proofs of Possession
Why You Should Migrate to OAuth 2.0 From Static API Tokens
How to Secure the SaaS Apps of the Future
Step-up Authentication in Modern Applications
OAuth 2.0 Security Enhancements
Add Step-up Authentication Using Angular and NestJS

Remember to follow us on Twitter and subscribe to our YouTube channel for more exciting content. We also want to hear from you about topics you want to see and questions you may have. Leave us a comment below!


PingTalk

Strong Customer Authentication & Compliance Under PSD2

Understand Strong Customer Authentication (SCA) and PSD2 compliance. Learn about requirements, best practices, and exemptions.

In the physical world, it’s relatively straightforward for banks, credit card issuers, and other institutions to verify a customer’s identity with a valid ID before they can access their accounts. But when it comes to securing online accounts and payment services, it isn’t as cut and dried.

Single-factor authentication methods, like password-based security, no longer suffice in the modern landscape. It’s now the standard to use multiple authentication factors to ensure customers are who they claim to be.

Global regulators have taken notice of the rising threats to online account security and passed legislation to standardize and strengthen authentication requirements in the financial sector. This includes the introduction of strong customer authentication (SCA) requirements, which are now enforced throughout the EU and the UK.

Below, we’ll cover strong customer authentication, the SCA requirements set out in PSD2, and what to expect from the new PSD3 and PSR1 legislation.

Wednesday, 04. September 2024

Spherical Cow Consulting

Why FIPS 140-3 Matters for Cryptography and Digital Identity Security


Cryptography is all about securing communications. Authentication, key exchange, token signing, digital signatures, zero-knowledge proofs, and so much more depend on cryptographic algorithms that no mere mortal (by which I mean me) will ever understand. The good news is that mere mortals do not need to understand these algorithms. Governments have the resources to truly dig into these algorithms and determine whether they are as secure and effective as intended. In the U.S., something called FIPS 140 sits at the heart of determining whether a cryptographic module—the actual hardware or software implementing these algorithms—is secure enough.

FIPS 140-3 is the latest iteration of the U.S. Federal Information Processing Standard (FIPS) that specifies the security requirements for cryptographic modules used by federal agencies and other organizations to protect sensitive information. If you have a cybersecurity company that does business with the U.S. Government, then you care about FIPS 140-3. If you don’t have a cybersecurity company but buy cybersecurity tools, knowing that the cryptographic modules they use to secure your data meet the FIPS 140-3 standards is a Very Good Thing.

If you aren’t involved in tech purchasing decisions for your company, this post will serve as interesting trivia for you to wow your geeky friends with over beverages. Apologies in advance for all the acronyms; they can’t be avoided if you’re in the world of tech.

Definitions

First, let’s get a few definitions out there:

Cryptography: Refers to the broader field of securing communications through mathematical techniques.
Cryptographic Algorithm: A specific method or procedure, like AES or RSA, used within the field of cryptography to encrypt or decrypt data, sign messages, or generate keys.
Cryptographic Module: A hardware or software component that implements cryptographic algorithms and provides secure services like encryption, decryption, authentication, or key management.

FIPS 140

The first FIPS 140 was published thirty years ago (where has time gone???). The U.S. federal government realized it needed to get a handle on how the government as a whole should use cryptographic modules in its tech. Prior to that, it was something of a free-for-all: each agency made its own decisions based on whatever information and staff it had on hand. Not great.

The best thing about version 1 of anything is that it suddenly sparks all SORTS of discussion. There are new requirements, positive and negative feedback, and a desire to improve. That resulted in FIPS 140-2, published over 20 years ago in 2001. (I’m still feeling old here.) FIPS 140-2 provided clearer definitions and more detailed requirements, arriving just as the science of cryptography advanced and new cryptographic algorithms needed to be considered.

The U.S. Government obviously isn’t the only entity out there working out the best way to evaluate cryptographic algorithms. That’s where the International Organization for Standardization (ISO) came in. In 2012, ISO published ISO/IEC 19790:2012, “Information technology — Security techniques — Security requirements for cryptographic modules.” The U.S. National Institute of Standards and Technology (NIST) was a member of the team making that global standard. As it came time to yet again refresh FIPS 140, it made sense to point it to ISO/IEC 19790:2012. That’s now FIPS 140-3.

Cryptographic Module Validation Program (CMVP)

So now there’s a standard, updated over time, that says, “Here are the requirements for cryptographic modules to be used by the federal government.” Great! How does the government ensure that those modules meet those requirements? That’s where the Cryptographic Module Validation Program (CMVP) comes in.

The CMVP is a joint effort between NIST and the Canadian Centre for Cyber Security. It provides guidelines for accredited Cryptographic and Security Testing Laboratories (CSTLs). Using those guidelines, the laboratories verify that a cryptographic module submitted by a vendor satisfies the requirements. The CSTL’s findings are submitted back to the program. If everything is copacetic, the module is added to the list of modules federal agencies can accept in their tools and services.

FIPS 140, the CMVP, and Digital Identity

So, how does this all tie into the world of digital identity? I have a list!

There are two things in particular to remember. First, of course, is noting that cryptography is used in a variety of ways when it comes to digital identity. Encrypting tokens, signatures, keys, and more is a fundamental necessity. Second, the federal government spends a mind-boggling amount on cybersecurity. This means their requirements for cybersecurity—such as the cryptographic modules used in the tools and services they purchase—influence almost everything in the cybersecurity industry. While following the FIPS 140 guidelines is only _required_ for federal agencies, in practice, its reach is much broader.

Given those points, FIPS 140-3 helps lay the groundwork for secure digital identity by ensuring that the cryptographic modules used are not just good, but government-approved good. And if that isn’t enough, given that FIPS 140-3 now basically points to an internationally developed standard in the form of ISO/IEC 19790:2012, then you’re talking about something that has achieved consensus on a global scale. That’s a level of assurance that goes beyond just checking a box. It’s knowing that the systems managing your identity are backed by some of the best cryptographic practices in the world.

Wrap Up

As a regular consumer, you really don’t need to know about FIPS 140 and its associated validation program. As a cybersecurity practitioner, you should at least be aware that it exists and what its implications are. And as an executive responsible for the security of your company or what goes into your products, all of this should be familiar to you already.

This is going to be an area I learn more about over the next few months. And since I learn best through writing, you can expect more blog posts on the topic of how the U.S. Government thinks about cryptographic modules. Stay tuned!

I want to help you go from overwhelmed at the rapid pace of change in identity-related standards to prepared to strategically invest in the critical standards for your business. Follow me on LinkedIn or reach out to discuss my Digital Identity Standards Development Services.

The post Why FIPS 140-3 Matters for Cryptography and Digital Identity Security appeared first on Spherical Cow Consulting.


KuppingerCole

Oct 15, 2024: A False Sense of Security: Authentication Myths That Put Your Company at Risk 

In today's digital landscape, organizations often fall prey to a false sense of security, particularly concerning authentication practices. Misconceptions about identity security can leave companies vulnerable to evolving threats, potentially compromising sensitive data and systems. Understanding the realities behind these myths is crucial for developing robust authentication strategies.

Ontology

Decentralized Identity and Reputation: Balancing Freedom and Regulation in Digital Platforms

Decentralized Identity and Reputation: Balancing Freedom and Regulation in Digital Platforms

In today’s digital landscape, the rapid pace of technological innovation has brought us to a crossroads, where the ideals of privacy, autonomy, and freedom meet the very real challenges of regulation. While decentralized platforms promise a world free from the prying eyes of governments and corporations, they also pose significant challenges, particularly when they are used to facilitate illegal activities. Take, for example, the infamous cases of Silk Road, Tornado Cash, and Telegram — each a flashpoint in the ongoing battle between technological freedom and the need for regulation. But what if there were a way to strike a balance? A decentralized reputation system, paired with anonymous identities, could offer a middle ground, where freedom meets responsibility.

The Evolution of Privacy Platforms: Case Studies

Silk Road: The Dark Web’s Pioneer

Silk Road was more than just an online black market; it was the first glimpse into a future where decentralized platforms could operate outside the reach of traditional law enforcement. Founded by Ross Ulbricht in 2011, Silk Road leveraged Bitcoin and the Tor network to create a truly global, anonymous marketplace. It was a hub for illegal activities — primarily drug trafficking — hidden from the watchful eyes of the law. The importance of Silk Road lies not just in its role as a market but in how it demonstrated the power of cryptocurrencies and decentralized platforms. It set a precedent, showing how these technologies could facilitate both freedom and crime on a massive scale.

Tornado Cash: Anonymizing Cryptocurrency Transactions

Tornado Cash pushed the boundaries of financial privacy. This cryptocurrency mixer on the Ethereum blockchain provided users with the tools to anonymize their transactions, protecting their financial data from surveillance. But with great power comes great responsibility — or, in this case, irresponsibility. Tornado Cash became a haven for money laundering, exploited by criminals and even North Korean hackers. The arrest of Tornado Cash developer Alexey Pertsev by Dutch authorities in August 2022 sparked a heated debate about the balance between privacy and security, and whether developers should be held accountable for the misuse of their creations.

Telegram: A Platform for Secure Communication

Telegram’s commitment to privacy and encryption has made it the go-to app for nearly 1 billion users seeking secure communication. From activists to journalists, many rely on Telegram to protect their privacy in the face of government surveillance. However, while Telegram is not decentralized, its strong encryption and anonymity features have also made it attractive to criminal organizations, coordinating everything from drug trafficking to child exploitation. The recent arrest of Telegram’s CEO, Pavel Durov, in France has intensified the debate about the role of tech platforms in moderating content and their accountability for illegal activities.

The Regulatory Response: Challenges and Consequences

The arrests of figures like Ulbricht, Pertsev, and Durov are part of a broader governmental push to regulate decentralized and privacy-focused platforms. But this raises a tough question: are we stifling innovation and free speech in the process? The legal complexities of regulating these platforms, especially when it comes to holding developers accountable, highlight the difficulty of balancing privacy with security.

Proposed Solution: Decentralized Identity and Reputation Systems

So, how do we move forward? One potential solution lies in the development of decentralized reputation systems paired with anonymous identities. Imagine a world where users can maintain their privacy while building a reputation based on their actions within the community. Such a system could empower communities to self-regulate, reducing the need for external oversight.

Anonymous Identity Systems

Anonymous identity systems could be the key to balancing privacy with accountability. These systems would allow users to engage with decentralized platforms without revealing their true identities, while still being held accountable for their actions.

Decentralized Reputation Systems

A decentralized reputation system could serve as a form of self-regulation. Users would build reputations based on their behavior, with ethical actions rewarded and illegal activities flagged or excluded. This could mitigate the need for heavy-handed regulation while preserving the core values of decentralization.

Practical Considerations and Challenges

Of course, implementing such systems won’t be without challenges. From technical limitations to potential exploitation, these solutions require careful design and community buy-in. But with transparency and engagement, we could create a system that balances freedom with responsibility.

Conclusion

The stories of Silk Road, Tornado Cash, and Telegram underscore the dual-edged sword of privacy-focused technology. While these platforms offer unprecedented privacy and autonomy, they also create new avenues for crime. A balanced approach, using decentralized reputation systems and anonymous identities, could offer a path forward. As we continue to navigate the digital age, it’s essential that we foster dialogue between innovators, regulators, and users to ensure that technology serves the greater good, protecting both freedom and security in this brave new world.

Decentralized Identity and Reputation: Balancing Freedom and Regulation in Digital Platforms was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


BlueSky

Bem Vindos ao Bluesky! (Welcome to Bluesky!)

What a week! In the last few days, more than 2.6 million users have signed up for the platform, over 85% of them Brazilian. A very warm welcome; we are thrilled to have you here!

What makes Bluesky different?

By design, Bluesky puts you first and gives you more control. Here, you can choose the social experience that suits you best.

Our community has grown organically and is full of authors, artists, journalists, politicians, and more. Brazilian users already on the platform have noticed that the engagement they get here is of far higher quality than on any other platform.

Bluesky is also an open ecosystem. We built an open social network that any developer can extend through the AT Protocol (the ecosystem is called the Atmosphere). This openness means that Bluesky is a collaborative project, unlike other social networks that are controlled by a single company. Anyone can build feeds, moderation services, and even entirely new apps on top of our platform.

When will Bluesky get video and trending topics?

Video will be available in our next major update, and we are already working on trending topics as well. We are keeping a close eye on your feedback and love the excitement.

What are some of Bluesky’s unique features?

Custom Feeds

Beyond the chronological Following feed and the classic Discover feed, you can try out new feeds! For example, if you want to see posts from friends who do not post much, try Quiet Posters. If you want to see the most popular content across the whole platform from the previous day, try Catch Up.

Anyone can create and subscribe to feeds. Instead of providing a single algorithm, we let our users choose. You are in control. The idea is to promote healthier discussion, because we do not incentivize engagement-farming schemes, misinformation, fake news, or any kind of abuse.

Usernames

If you own a website, you can use it as your username. For example, Folha de S. Paulo chose @folha.com as its username. Keep in mind that you can only use a website you actually own, since this is a way of showing that you are, for example, the real Folha de S. Paulo. It is a form of self-verification.

You can have fun and get creative with this! For example, many Swifties have chosen usernames ending in “swifties.social,” which you can set up using this tool here.

If you would like to purchase and manage a website through our partner Namecheap, you can do that here.

I’m tired of creating accounts on new social networks! Is Bluesky guaranteed to stick around?

We know, and we deeply understand that concern. But Bluesky is here to stay.

When a platform like X shuts down, you lose touch with all your friends there. But because Bluesky is an open network, you can take your followers with you. You will always be able to stay in touch with your friends. (If you are interested in the technical details, there is more information about account portability here.)

And there is more! Because the network is open, independent developers can build entirely new apps and offer you other experiences. Imagine a blogging platform or a photo app on this same network, with all your friends already connected. You will not need to sign up for yet another social app this time; you will be creating an online social identity that is yours alone.

How does Bluesky handle free speech and content moderation?

Safety and fostering healthy spaces for conversation are central concerns for Bluesky. Our moderation team is on duty 24/7 and responds to most reports within a few days. To report a post or an account, simply click the three-dot menu and select “Report Post” or “Report Account.”

At the same time, we understand that no single approach works for moderating every space. So, on top of Bluesky’s solid moderation baseline, you can subscribe to organizations you trust, or to communities with specific expertise, that add their own moderation rules. (Read more about ways to add moderation rules here.)

How do you plan to handle election misinformation?

Aaron Rodericks, the head of our Trust & Safety team, dealt with these issues at Twitter and has brought his experience here. Our moderation team reviews content and accounts for misinformation, which users can report directly from the app. In cases of severe violations, such as risks to voting or to official elections, we may remove the content or even the account. In most cases, we review claims that content is false against trustworthy sources, and we reserve the right to label posts as misinformation.

Journalists can get in touch at press@blueskyweb.xyz. For our media kit, where you will find our logo and photos, click here.


Welcome to Bluesky!

What a week! In the last few days, Bluesky has grown by more than 2.6 million users, over 85% of which are Brazilian. Welcome, we are so excited to have you here!

What a week! In the last few days, Bluesky has grown by more than 2.6 million users, over 85% of which are Brazilian. Welcome, we are so excited to have you here!

What makes Bluesky different?

By design, Bluesky prioritizes you and gives you more control. Here, you can customize your social experience to fit you.

Our community has grown organically, and is full of creators, artists, journalists, politicians, and more. Brazilian users on Bluesky have noticed that they receive much higher quality engagement on Bluesky than on any other platform.

In addition, Bluesky is an open ecosystem. We’re built on an open network that developers can freely build upon called the AT Protocol (and the ecosystem is called the Atmosphere). This openness means that Bluesky is a collaborative project, unlike other social networks that are controlled by a single company. Anyone can build feeds, moderation services, and even entirely new apps on top of our network.

What are some unique features on Bluesky?

Custom Feeds

Outside of your chronological Following feed and the default Discover feed, you can try out some new feeds! Maybe you want to see posts from your friends who don’t post as often — try Quiet Posters. If you want to see the top posts across the whole network from the last day, try Catch Up.

Anyone can create and subscribe to feeds. Instead of providing only a single algorithm, we let users choose. You’re in control. This promotes healthier discussion because we do not incentivize engagement baiting, misinformation, or harassment.

Usernames

You can set your username to be a website that you own. For example, Folha de S. Paulo set their Bluesky username to @folha.com. You can only set your username to a website that you own, so this shows you that the real Folha de S. Paulo owns this account. It’s one form of self-verification.
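Under the hood, the AT Protocol checks domain ownership by looking for a DNS TXT record (or a well-known HTTPS endpoint) that points the domain at your account’s decentralized identifier (DID). A sketch of what such a record looks like, with a placeholder DID:

_atproto.folha.com.  TXT  "did=did:plc:your-account-did-goes-here"

Only someone who controls the domain’s DNS can publish that record, which is what makes a domain handle self-verifying.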

There’s lots of room to have fun with this! For example, many Swifties are using usernames that end in “swifties.social,” which you can set up with this community tool here.

If you’d like to purchase and manage a website through Bluesky’s partnership with Namecheap, you can do that here.

I’m tired of creating accounts on new social apps! Will Bluesky stick around?

We know, we’ve been there too. Bluesky is here to stay.

When an app like X shuts down, you lose touch with all your friends there. But because Bluesky is built on an open network, you can easily take your followers with you. You will always be able to stay in touch with your friends. (If you’re interested in the technical details, you can read more about account portability here.)

Additionally, because of the open network, independent developers can build entirely new apps and experiences. Imagine a blogging platform or a photo app built on this same network, with all of your friends already connected. You’re not just signing up for another social app this time — you’re creating a social identity online that you own.

When will Bluesky have video and trending topics?

Video will be available in the next major app release, and we’re working on trending topics too. We’re paying close attention to your feedback and appreciate everyone’s excitement.

How does Bluesky handle content moderation?

Trust and safety is core to Bluesky, and we value spaces for healthy conversation. Our moderation team provides 24/7 coverage and responds to most reports within a few days. To report a post or an account, simply click the three-dot menu and click “Report post” or “Report account.”

At the same time, we recognize that there’s no one-size-fits-all approach to moderation. So, on top of Bluesky's strong foundation, users can subscribe to additional moderation decisions from more organizations they trust with industry-specific or community-specific knowledge. (Read more about our stackable approach to moderation here.)

What is your plan for election misinformation?

Aaron Rodericks, Bluesky's Head of Trust & Safety, formerly led election integrity efforts at Twitter and has brought his experience here. Our moderation team reviews content or accounts for misinformation, which users can report directly within the app. In the case of severe violations such as a risk to polling places or election officials, we may remove content or accounts. In most cases, we review claims against credible sources and fact checkers, and may label posts as misinformation.

Journalists can reach us with inquiries at press@blueskyweb.xyz. For our media kit, where you can find our logo and headshots, click here.

Tuesday, 03. September 2024

Microsoft Entra (Azure AD) Blog

MFA enforcement for Microsoft Entra admin center sign-in coming soon

As cyberattacks become increasingly frequent, sophisticated, and damaging, safeguarding your digital assets has never been more critical. In October 2024, Microsoft will begin enforcing mandatory multifactor authentication (MFA) for the Microsoft Entra admin center, Microsoft Azure portal, and the Microsoft Intune admin center. 
We published a Message Center post (MC862873) to all Microsoft Entra ID customers in August. We’ve included it below:
Take action: Enable multifactor authentication for your tenant before October 15, 2024
Starting on or after October 15, 2024, to further increase your security, Microsoft will require admins to use multifactor authentication (MFA) when signing into the Microsoft Azure portal, Microsoft Entra admin center, and Microsoft Intune admin center. 
Note: This requirement will also apply to any services accessed through the Intune admin center, such as Windows 365 Cloud PC. To take advantage of the extra layer of protection MFA offers, we recommend enabling MFA as soon as possible. To learn more, review Planning for mandatory multifactor authentication for Azure and admin portals.
How this will affect your organization:
MFA will need to be enabled for your tenant to ensure admins are able to sign into the Azure portal, Microsoft Entra admin center, and Intune admin center after this change.
What to do to prepare:

If you have not already, set up MFA before October 15, 2024, to ensure your admins can access the Azure portal, Microsoft Entra admin center, and Intune admin center. If you are unable to set up MFA before this date, you can apply to postpone the enforcement date. If MFA has not been set up before the enforcement starts, admins will be prompted to register for MFA before they can access the Azure portal, Microsoft Entra admin center, or Intune admin center on their next sign-in. 
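
One way to prepare ahead of the deadline is to pre-stage a Conditional Access policy that requires MFA for the admin portals. The sketch below uses Microsoft Graph and should be read as illustrative rather than official guidance: it assumes a token carrying the Policy.ReadWrite.ConditionalAccess scope and the "MicrosoftAdminPortals" application target, and it starts the policy in report-only mode so sign-in logs can be reviewed before enforcement.

```python
# Illustrative sketch: create a report-only Conditional Access policy
# requiring MFA for the Microsoft admin portals via Microsoft Graph.
# GRAPH_TOKEN must carry Policy.ReadWrite.ConditionalAccess (assumption).
import os, requests

policy = {
    "displayName": "Require MFA for admin portals",
    "state": "enabledForReportingButNotEnforced",  # report-only first
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["MicrosoftAdminPortals"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```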
For more information, refer to: Planning for mandatory multifactor authentication for Azure and admin portals.
Jarred Boone

Senior Product Marketing Manager, Identity Security
Read more on this topic 

Planning for mandatory multifactor authentication for Azure and other administration portals  
Learn more about Microsoft Entra  

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds. 

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

KuppingerCole

Passwordless Authentication for Enterprises

by Alejandro Leal

Explore the rise of passwordless authentication, its security benefits, and how it mitigates common password-based attacks like phishing, brute-force, and ATO fraud. This Buyer's Compass can help you find the solution that best fits your business needs.

Tokeny Solutions

ShipFinex and Tokeny Forge Strategic Partnership to Revolutionize Maritime Asset Tokenization

Luxembourg, 3rd September 2024 – ShipFinex, a leading innovator in maritime finance, and Tokeny, the pioneering onchain finance operating system for tokenized securities, are proud to announce a strategic partnership aimed at transforming the way maritime assets are tokenized and managed.

This collaboration brings together two industry pioneers with a shared vision of enhancing transparency, security, and compliance in the tokenization of maritime assets. By joining forces, ShipFinex and Tokeny are poised to set a new standard in the digital finance landscape, particularly within the multi-billion-dollar maritime sector.

Elevating Maritime Finance

Shipping and maritime finance has long been an exclusive asset class: limited access through public equity markets and significant initial capital requirements have made it difficult for many investors to participate, even though the shipping market has consistently outperformed many other asset classes.

ShipFinex and Tokeny are committed to democratizing access to maritime investments. Through this partnership, ShipFinex will leverage Tokeny’s cutting-edge technology to ensure that all tokenized maritime assets on its platform meet the highest standards of regulatory compliance and security, using the ERC-3643 standard. This integration not only enhances investor confidence but also positions both companies as leaders in the digital transformation of maritime finance.

Strategic Alignment

The partnership between ShipFinex and Tokeny is a strategic alignment that amplifies the strengths of both companies. ShipFinex’s expertise in maritime finance, combined with Tokeny’s proven track record in tokenized securities infrastructure, creates a powerful synergy that is expected to accelerate the growth and adoption of tokenized maritime assets globally.

Following ShipFinex’s recent announcement of receiving initial approval from VARA in the UAE, this partnership underscores the company’s commitment to adopting world-class solutions to enhance its platform’s security and compliance. This collaboration highlights the robust infrastructure and innovative, regulated approach underpinning ShipFinex’s operations.

Looking Ahead

This strategic partnership sets the stage for future growth and expansion, as both ShipFinex and Tokeny continue to innovate and lead in their respective fields. The integration of their capabilities will facilitate the broader adoption of tokenized maritime assets, offering investors a secure and efficient marketplace.

About ShipFinex

ShipFinex is revolutionizing maritime finance by providing a secure, transparent, regulated and efficient marketplace for tokenized maritime assets, enabling global investors to access and trade these assets like never before.

About Tokeny

Tokeny is a leading onchain finance operating system. Tokeny has pioneered compliant tokenization with the open-source ERC-3643 standard and advanced white-label software solutions. The enterprise-grade platform and APIs unify fragmented onchain and offchain workflows, integrating essential services to eliminate silos. It enables seamless issuance, transfer, and management of tokenized securities. By automating operations, offering innovative onchain services, and connecting with any desired distributors, Tokeny helps financial actors attract more clients and improve liquidity. Trusted globally, Tokeny has successfully executed over 120 use cases across five continents and facilitated 3 billion onchain transactions and operations.

Website | LinkedIn | X/Twitter

The post ShipFinex and Tokeny Forge Strategic Partnership to Revolutionize Maritime Asset Tokenization appeared first on Tokeny.


PingTalk

Tailored Government ICAM Capabilities in FedRAMP High & DoD IL5

Ping Identity expands its FedRAMP High and DoD IL5 offerings with the addition of critical identity, credential, and access management capabilities.

If you're in the government space, you know how crucial it is to balance robust security and access with a seamless digital experience. We're thrilled to announce some major updates to Ping Government Identity Cloud that'll make your lives a whole lot easier. These capabilities are crucial for hitting essential security benchmarks, especially for DoD agencies and the Defense Industrial Base (DIB).

Monday, 02. September 2024

Dock

Dock and cheqd Form Alliance to Accelerate Global Adoption of Decentralized ID

We are excited to announce that the Dock and cheqd tokens and blockchains are merging to form a Decentralized ID alliance.

By harnessing the combined strengths of two industry pioneers, Dock and cheqd will accelerate the global adoption of decentralized identity and verifiable credentials, empowering individuals and organizations worldwide with secure and trusted digital identities.

Existing $DOCK tokens will be converted into $CHEQ tokens (pending governance approval from token holders in both communities). This will mark a new chapter of opportunity for our token holders who will benefit from all the Web3 resources cheqd has at their disposal. 

Full article: https://dock.io/post/dock-and-cheqd-form-alliance-to-accelerate-global-adoption-of-decentralized-id


KuppingerCole

SOAR Platforms and Generative AI: Building an AI-Skilled Workforce

by Alejandro Leal

From Luddites to AI

Legend has it that in 1779, a man named Ned Ludd, angered by criticism and orders to change his traditional way of working, smashed two stocking frames. This act of defiance became emblematic of the “Luddite” movement against the encroaching mechanization that threatened the livelihoods of skilled artisans during the early Industrial Revolution.

Throughout history, workers have adapted to new technologies, from the complex machinery of the Industrial Revolution to today's sophisticated AI systems. Initially, industrial workers had to master mechanical operations to support mass production. Later, the digital revolution demanded proficiency with computers for a variety of tasks.

Now, the integration of AI in workplaces emphasizes skills in managing and leveraging intelligent systems to boost productivity and decision-making processes. This ongoing evolution demonstrates the need for continuous learning and adaptability, underscoring the increasing complexity of skills involved in today’s jobs.

The Evolving Role of Cybersecurity Analysts

Building an AI-skilled workforce requires not only equipping professionals with the tools and knowledge necessary to leverage AI technologies, but also addressing the persistent challenges of the human factor in cybersecurity by implementing the right tools, cultivating a cybersecurity culture, and fostering new skills.

For example, the art of prompt engineering is a relatively new and useful skill. This discipline allows analysts to develop and optimize prompts to use Large Language Models (LLMs) efficiently. These prompts are designed to optimize the language model's performance, ensuring that it produces the desired output with minimal computational resources. For security analysts, generative AI offers a remarkable leap forward in the effectiveness of their work.

The integration of generative AI into Security Orchestration, Automation, and Response (SOAR) platforms has the potential to change the role of Security Operations Centre (SOC) analysts. This technology automates routine tasks, allowing analysts to spend more time on strategic aspects of their roles, such as planning new defensive strategies, identifying emerging threats, and formulating proactive mitigation plans.
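
To make the prompt-engineering point concrete, here is a small, model-agnostic sketch of a triage prompt a SOC analyst might template. The alert fields are hypothetical, and the resulting string can be sent to whichever LLM API a SOAR platform integrates:

```python
# Illustrative prompt template for SOC alert triage. The alert payload
# is hypothetical; pass the built prompt to any LLM client you use.
import json

TRIAGE_PROMPT = """You are a SOC tier-1 analyst assistant.
Given the alert JSON below, respond with exactly three sections:
1. Summary (two sentences, plain language)
2. Severity (low, medium, or high) with a one-line justification
3. Next steps (at most three concrete, actionable bullet points)

Alert:
{alert}
"""

def build_triage_prompt(alert: dict) -> str:
    return TRIAGE_PROMPT.format(alert=json.dumps(alert, indent=2))

alert = {  # hypothetical alert
    "rule": "impossible_travel",
    "user": "j.doe",
    "src_ips": ["198.51.100.7", "203.0.113.42"],
    "window_minutes": 14,
}
print(build_triage_prompt(alert))
```

Constraining the output format, as above, is what makes such prompts cheap to parse downstream in an automated playbook.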

Balancing Innovation and Responsibility

However, the potential use of generative AI goes beyond simply automating tasks or interacting with a chatbot. For instance, SOC analysts can now use generative AI to craft detailed playbooks that document the steps taken during an incident response. This documentation process not only automates responses but also builds a knowledge base that can inform future responses.

SOC analysts can also use generative AI to create alerts and to perform tasks such as threat detection, incident analysis, event summarization, report generation, decision support, and playbook template suggestions. While the integration of generative AI into SOAR platforms offers substantial benefits, several challenges still need to be addressed.

Generative AI requires access to vast amounts of data to learn and make decisions. Ensuring that this data is handled securely and in compliance with privacy regulations is a significant challenge. In addition, there is a risk that AI models may develop biases based on the data they are trained on, which can lead to inaccurate or unfair outcomes.

Therefore, the use of generative AI must be accompanied by thorough quality control on the part of the vendor, to ensure that the information provided is indeed useful and accurate. This balanced approach reflects a careful consideration of both the opportunities and the complexities involved with integrating new technologies into security operations.

While some vendors are highly optimistic about the transformative potential of generative AI in SOAR solutions, others remain cautious, choosing to monitor the industry's development closely. These cautious vendors prioritize understanding how to align with customer expectations and carefully evaluate the practical advantages and potential challenges of implementing generative AI.

Great Expectations

By harnessing the potential of generative AI, however, SOC analysts can broaden their scope within cybersecurity practices, cultivating new knowledge and developing new skills.  While Ludd's reaction was to destroy the machines he feared would replace human craftsmanship, the challenge now is not to resist technological advancement, but to integrate it. This approach reflects a broader trend in AI development, where the goal is not to replace human endeavor, but to augment it.

As a result, vendors should prioritize transparency in their marketing to demonstrate the practical value of generative AI, rather than relying on hype or jargon. This approach not only educates customers about the capabilities and limitations of generative AI but also helps in setting realistic expectations. For more on this, see my colleague John Tolbert's blog post on Some Direction for AI/ML-ess Marketing.

Join us in December in Frankfurt at our cyberevolution conference, where we will continue to dissect how AI is used in cybersecurity.

See some of our other articles and videos on the use of AI in security:

Cybersecurity Resilience with Generative AI

Generative AI in Cybersecurity – It's a Matter of Trust

ChatGPT for Cybersecurity - How Much Can We Trust Generative AI?

Asking Good Questions About AI Integration in Your Organization

Reflections & Predictions on the Future Use (and Mis-Use) of Generative AI in the Enterprise and Beyond


Passwordless Authentication for Enterprises

by Alejandro Leal

This report provides a detailed examination of passwordless authentication technologies designed for enterprise use cases. As organizations increasingly prioritize robust and streamlined security protocols, the demand for sophisticated passwordless solutions has grown significantly. This report explores the current landscape of enterprise-focused passwordless authentication technologies and guides businesses in selecting the most effective solution to meet their security needs. By analyzing the market segment, vendor product and service functionality, relative market share, and innovative approaches, organizations can make informed decisions about their authentication strategies for their employees and systems.

Finema

This Month in Digital Identity — September Edition

Welcome to the September edition of our monthly digital identity series! This month, we’re exploring the critical developments and innovative strategies that are redefining the landscape of digital identity. Here’s a closer look at the essential topics we’ll be covering:

AI Enhancing Healthcare Fraud Prevention

Artificial Intelligence (AI) is becoming a crucial tool in combating healthcare fraud by analyzing vast datasets in real-time to detect fraudulent activities, particularly through voice biometrics that verify patient identities and prevent unauthorized access to healthcare services. Additionally, there is a growing focus on enhancing patient experiences through digital trust technologies, such as secure digital signatures and messaging platforms, which protect patient data and streamline healthcare processes. Innovations like chip-based ID cards are also being adopted, as seen in Vietnam, to secure patient information and simplify access to healthcare services, reducing the risk of identity theft and fraud. These technological advancements collectively aim to strengthen the integrity of healthcare systems, safeguard patient data, and improve operational efficiency, ultimately enhancing the overall patient experience.

Somalia’s Financial Inclusion Drive

Somalia is advancing its digital transformation with a new Memorandum of Understanding (MoU) between the National Identification and Registration Authority (NIRA) and the Somali Banks Association (SBA) to drive financial inclusion through the national ID program. Launched a year ago, this program aims to provide the country’s 18 million residents with a unified identity, facilitating access to banking services and aligning with global standards. The partnership seeks to enhance financial security, reduce fraud, and streamline banking processes by using the National Identification Number (NIN) for customer verification. This initiative is part of a broader effort to bolster the country’s economy, ensure compliance with international regulations, and increase public trust in financial institutions. The collaboration has been praised by key government figures and international partners, who see it as crucial for Somalia’s development. Ongoing consultations with stakeholders aim to further strengthen the national ID system, making it more impactful in supporting economic growth and modernizing financial services.

Spain’s New Age Verification System

Spain has introduced technical specifications for a new online age verification system aimed at controlling minors’ access to adult content, using W3C Verifiable Credentials (VCs) as the core technology. This approach addresses growing concerns over the negative impact of unrestricted access to adult content on the mental health and social skills of children and teenagers. By implementing W3C VCs, Spain ensures that age verification is conducted securely and privately, without disclosing personal information, thus aligning with GDPR principles. W3C VCs offer unmatched security through advanced cryptographic methods, enhanced privacy by allowing users to share only necessary information, and portability by integrating seamlessly with digital wallets. The system also follows the OpenID For Verifiable Presentations (OpenID4VP) specification, ensuring secure and private verification, and includes a trust management framework to ensure only authorized entities can issue or verify credentials, making it an ideal solution for protecting minors online.

The Digital Travel Credential (DTC)

In the realm of digital identity, numerous digital credentials are vying to replace physical documents, with the European Union’s eIDAS 2.0 and digital driver’s licenses being notable examples. However, none match the Digital Travel Credential (DTC) standard for digital trust, developed by the International Civil Aviation Organization (ICAO), which sets the universal standards for passports. The DTC, designed as the digital equivalent of a passport, offers two types: one created by a user from their physical passport and another issued directly by passport authorities. Indicio and SITA pioneered the implementation of the Type 1 DTC, which is now being adopted by countries and airlines for seamless travel. The DTC’s strength lies in its use of cryptographic verification, ensuring that passport data is securely held on a user’s device without needing to be stored in centralized databases, mitigating risks of data breaches. By scanning their passport, users can verify the authenticity of their data, bind it to their device through biometric checks, and ensure that their digital credentials are trustworthy and tamper-proof. This system provides airlines, airports, and border control with the confidence to streamline travel processes, knowing that the data in the DTC is authenticated, portable, and instantly verifiable.
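
The integrity check behind this can be illustrated with a short sketch. ICAO's passive authentication scheme signs a list of data-group hashes (the Document Security Object), so a verifier can detect tampering by re-hashing what it reads from the chip. The code below shows only that hash-comparison step with hypothetical data, and stubs out the signature validation that real systems perform against the issuing country's CSCA certificate:

```python
# Sketch of the hash check in passive authentication: the SOD carries
# issuer-signed hashes of each data group (DG). Signature validation
# against the country's CSCA certificate is omitted (assumption: it
# has already succeeded).
import hashlib

def verify_data_groups(signed_hashes: dict, read_groups: dict) -> bool:
    for dg_number, data in read_groups.items():
        if signed_hashes.get(dg_number) != hashlib.sha256(data).hexdigest():
            return False  # data group altered or substituted
    return True

# Hypothetical contents: DG1 holds the MRZ, DG2 the face image.
dg1, dg2 = b"P<UTODOE<<ALICE<<<...", b"<face image bytes>"
sod_hashes = {1: hashlib.sha256(dg1).hexdigest(),
              2: hashlib.sha256(dg2).hexdigest()}
assert verify_data_groups(sod_hashes, {1: dg1, 2: dg2})
print("data groups match the signed hashes")
```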

We look forward to bringing you more insightful updates as we continue to explore the latest trends and innovations in the field of digital identity. Stay tuned for future editions of our monthly segment!

This Month in Digital Identity — September Edition was originally published in Finema on Medium, where people are continuing the conversation by highlighting and responding to this story.


Metadium

POSTECH Adopts Metadium Mainnet-Based Smart Student ID

Dear Community,

We have some exciting news to share. Pohang University of Science and Technology (POSTECH) has adopted a blockchain-based smart student ID using Metadium’s mainnet. This significant achievement demonstrates the excellence and reliability of Metadium’s technology.

Here are the unique features that make POSTECH’s smart student ID stand out:

Security and Privacy: Students’ personal information is securely protected through the Metadium mainnet, making it impossible to falsify or tamper with user information.

Convenient Use: Using blockchain-based DID authentication, users can manage their personal information and selectively submit information. Additionally, students can easily issue and use mobile student IDs remotely through their smartphones.

Efficient Management: The university can now issue mobile smart student IDs through an online automated process, in addition to plastic student IDs, enabling more efficient workflow improvements.

This case at POSTECH is an excellent example of how blockchain technology can be applied to make our lives more convenient. Our Metadium team will continue to strive for more universities and institutions to use Metadium’s technology.

We are truly grateful for the unwavering interest and support from the Metadium community. We eagerly look forward to your continued support.

Thank you.

Website | https://metadium.com

Discord | https://discord.gg/ZnaCfYbXw2

Telegram(EN) | http://t.me/metadiumofficial

Twitter | https://twitter.com/MetadiumK

Medium | https://medium.com/metadium

POSTECH Adopts Metadium Mainnet-Based Smart Student ID was originally published in Metadium on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 01. September 2024

KuppingerCole

Generative AI in SOAR: Balancing Innovation and Responsibility

Generative AI is ubiquitous - anyone can use ChatGPT and other tools for free to create text, images, and more. But generative AI also has potential in the professional environment. Businesses should consider how they can leverage the use of AI with prompt engineering etc.

In this episode, Alejandro and Matthias discuss the integration of machine learning and AI into cybersecurity infrastructures, particularly SOARs. The conversation covers the role of generative AI in changing the daily tasks of cybersecurity professionals, the challenges of integrating generative AI into SOAR platforms, the importance of prompt engineering, and the need for a balanced approach to innovation and accountability. It also addresses the security and ethical considerations of using AI in cybersecurity and the general impact of generative AI on different industries.



Friday, 30. August 2024

auth0

Deploy Secure Spring Boot Microservices on Azure AKS Using Terraform and Kubernetes

Deploy a cloud-native Java Spring Boot microservice stack secured with Auth0 on Azure AKS using Terraform and Kubernetes.

Okta Fine Grained Authorization is now Available in Private Cloud on AWS

Now, you can deploy Okta FGA in several AWS regions with high availability and requests per second.

Thursday, 29. August 2024

Spruce Systems

Why the U.S. Post Office is Key to Fighting AI Fraud

Pending legislation could transform the venerable USPS into a key player in the fight against fraud.

For years now, the United States Postal Service has been struggling to adjust to the digital world, as the decline of letter mail has left the agency’s budget in shambles. That’s a threat to the Postal Service’s role in connecting all Americans.

Fortunately, a bill under consideration in the U.S. Senate, the POST ID Act, would reinvigorate the venerable service for a new era, help improve USPS’s budget woes – and make it a powerful asset for digital security. The bill proposes using physical Post Office locations to offer real-world identity verification – verification that would, in turn, help fight fraud and disinformation online.

That’s similar to the way DMV locations in states like California issue both traditional and digital driver’s licenses. But the Post Office could play a much broader role: the bill’s bipartisan sponsors, Bill Cassidy (R-LA) and Ron Wyden (D-OR), want to allow the Post Office to perform identity verifications for an array of private clients, in addition to public sector agencies it already serves. Combined with some product strategy, this new paid service could help to balance the agency’s budget as well.

This new USPS service would be an extension of the agency’s longtime work connecting people against all obstacles. Instead of refusing to stop for “snow nor rain nor heat nor gloom of night,” this new Postal Service would also be tasked with helping overcome hackers.

A Physical Network for the Digital Age

Senator Wyden was absolutely spot-on when he said that “AI deepfakes have added a whole new challenge for the most common [online identity] verification methods. The best way to confirm who someone is, is in-person verification.”

Wyden’s warning came in October of last year, and the threat of AI has only become more obvious since then. That includes a recent report that artificial intelligence was being used to create convincing fake ID cards at an unprecedented scale, and the equally concerning evolution of deepfake tools into the realm of video, allowing convincing live impersonation online.

But those tricks don’t work in the physical world. Only a real, natural human can walk up to the counter at a Post Office and seek identity verification by a fellow human. Not just physical appearance, but also biometrics like fingerprints are much harder to fake in person than online.

There are very few entities of any sort better positioned to conduct that affirmation than the U.S. Post Office. The USPS has a staggering 31,123 locations across practically every corner of America - even without including locations operated under contract. Post Offices can be found in far-flung U.S. territories like Guam, or at the far northern edge of Alaska, guaranteeing new verification services can be accessed by very nearly every American.

Once an identity is verified in person, it can be digitally recorded using new digital identity credential technology that is extremely trustworthy and secure—and even lets users verify their humanness without revealing their identity.

The Power of Cryptography

The Cassidy-Wyden bill would give the USPS new responsibilities for verifying natural humans, and the ability to serve an array of clients would create a new stream of revenue for the agency. Those verifications would then need to be represented as a trustworthy “digital credential” for users to present online. Luckily, such systems already exist, for instance, in the form of the digital driver’s license offered in California and a growing list of other states.

Trustworthy digital credentials rely on a mix of innovative encryption and widely available hardware – specifically, your mobile phone. In broad outline, a credential issuer like the DMV or Post Office would have a unique digital ‘signature’ tied to a secure computer on-site. After conducting identity verification, the USPS office would digitally sign a credential using the “secure element” chip in the recipient’s mobile phone. This credential could then be presented in a variety of contexts to help a user prove their identity.

The details of the “identity” that a user wants to prove can vary widely, and digital credentials of this sort are very flexible. A common feature of digital credentials is what’s known as “selective disclosure,” which lets a credential holder share only the minimum required information in a particular interaction. 

At its most minimal, a digital credential issued by the USPS could prove only that the holder is a real human being without disclosing any other identifying data. As laid out in a recent research paper by a coalition including researchers from SpruceID, this simple “personhood credential” could be a key element in the fight against costly identity fraud and toxic disinformation online.
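
Selective disclosure is easier to grasp with a toy example. The sketch below uses the salted-hash idea found in formats like SD-JWT: the issuer signs only commitments to claims, and the holder later reveals just one claim plus its salt. The HMAC here is a stand-in for a real public-key signature (an assumption made for brevity; a real verifier would check the issuer's signature with a public key, not a shared secret):

```python
# Toy selective disclosure: sign commitments, reveal one claim.
import hashlib, hmac, json, secrets

ISSUER_KEY = b"demo-issuer-key"  # stand-in for a real signing key

def commit(salt: str, name: str, value) -> str:
    return hashlib.sha256(f"{salt}|{name}|{value}".encode()).hexdigest()

# Issuance: commit to each claim with a fresh salt; sign only the digests.
claims = {"name": "Alice Doe", "over_18": True, "address": "1 Main St"}
salts = {k: secrets.token_hex(16) for k in claims}
digests = sorted(commit(salts[k], k, v) for k, v in claims.items())
signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(),
                     hashlib.sha256).hexdigest()

# Presentation: the holder reveals only the "over_18" claim and its salt.
name, value, salt = "over_18", claims["over_18"], salts["over_18"]

# Verification: check the signature over the digest list, then check the
# revealed claim's commitment appears in that signed list.
expected = hmac.new(ISSUER_KEY, json.dumps(digests).encode(),
                    hashlib.sha256).hexdigest()
assert hmac.compare_digest(signature, expected)
assert commit(salt, name, value) in digests
print("verified over_18 =", value, "without seeing name or address")
```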

Expanding the Network of Trust

The incredible omnipresence of USPS locations makes it an ideal candidate, alongside DMVs, to lead the charge for in-person identity verification and issuance. We can still think bigger, though.

Other trusted entities might be brought into the in-person verification network, expanding access and convenience even further. Candidates might include other shippers, such as UPS and FedEx, who have extensive physical networks and address and other data that can help confirm identities. In the most rural or remote parts of America, retailers might be recruited to the network, though they would require significant additional equipment and training. One benefit of allowing certified private sector participants to also provide in-person identity verification is to keep costs low for users and businesses, while incentivizing competition and innovation.

Over time, the identity verification process would also be streamlined for efficiency and convenience. One major potential efficiency would be collecting an applicant’s data online before an in-person verification session, reducing wait times and workloads. Streamlining of this sort would be particularly important since some digitally signed credentials need to be refreshed more often than conventional physical identity documents.

Offering identity verification via Post Office locations would be part of a yet more expansive system of verifications built on a shared standard for data formats, security practices, and privacy measures. The larger system that SpruceID is helping drive forward is flexible, offering various options for credential holders to choose what data they share.

But perhaps the most important yet challenging feature of this emerging system is creating broad access to in-person verification. For that, the good old Post Office will be hard to beat.

To learn more about SpruceID and our approach to fighting AI fraud, visit our website.

Learn More

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


liminal (was OWI)

Link How-To: Curate Actionable Insights and Gain a Competitive Edge with the Market Monitor™

With information overload becoming a constant challenge, quickly accessing relevant and actionable insights is essential to making informed decisions and staying competitive. The Link Market Monitor, powered by expert-in-the-loop AI technology, combines real-time data with expert analysis to cut through the noise and surface what’s important to you—and what you should do about it. By delivering only the most pertinent market signals, it allows you to efficiently spot trends and seize new opportunities. This guide will show you how to use the Market Monitor to tailor insights to your needs, ensuring you’re always a step ahead.

Step 1: Accessing the Market Monitor™

From the Dashboard: Navigate to your Link’s dashboard. Look for the Market Monitor widget, which displays recent headlines from your top monitors. Click on the widget to be taken directly to the Monitors Page.

Using the Left Navigation Menu: In the platform’s main interface, locate the “Market Monitor” link in the left-hand navigation menu. Click on it to access the Monitors Page.

Step 2: Setting Up Your Tailored Monitors

On the Monitors Page, you’ll find a list of pre-configured monitors that align with your industry interests, such as “Emerging Technologies,” “Competitive Landscape,” or “Market Trends.” Click the “create new monitor” button to create a new monitor that meets your specific needs. Here, you can specify companies, sectors, themes, keywords, and more to tailor your monitor’s focus.

Step 3: Exploring and Curating Insights

Opening a Monitor: Click “Open Monitor” on any monitor card you’ve created. You’ll be directed to the Monitor Detail Page, where a curated newsfeed offers real-time insights filtered by your set criteria.

Interacting with Curated Content: Scroll through the newsfeed to browse relevant articles and updates. Click on any article to open it in the reading pane, where you can explore the details. Use the filter bar at the top of the page to further refine the content within your monitor, ensuring you see only the most relevant insights.

Step 4: Leveraging Expert-in-the-Loop AI for Personalized Insights

The Link Market Monitor utilizes expert-in-the-loop AI technology, which combines real-time data with expert analysis to deliver personalized insights. As you interact with the monitors, the AI engine continuously learns from your preferences, fine-tuning the content it delivers to ensure it remains highly relevant to your needs.

Step 5: Receiving Real-Time Alerts and Updates

Set up real-time alerts to stay informed without the noise. The Market Monitor’s AI engine filters out irrelevant information, sending you only the most pertinent updates. Customize your alerts to focus on key trends, opportunities, and competitive threats, ensuring you never miss a critical development in your industry.

Step 6: Sharing Insights with Your Team

Collaborating on Strategies: Use the shared monitors to collaborate effectively, ensuring your team is aligned with the latest market intelligence and ready to make informed decisions.

Best Practices:

Regularly Update Your Monitors: As your business goals evolve, update your monitors to reflect new priorities and market conditions.

Maximize AI Insights: Leverage the expert-in-the-loop AI to refine and improve the relevance of your insights continuously.

Focus on What Matters: Use the real-time signals to stay on top of key developments, allowing you to react swiftly to market changes.

Why the Market Monitor™ is Essential for Business Leaders

Proactive Decision-Making: The Market Monitor™ equips you with the most relevant insights, empowering you to stay ahead of market trends and shifts. By providing timely, actionable information, it allows you to anticipate changes and make decisions that drive your organization forward.

Enhanced Strategic Focus: As a business leader, focusing on what truly matters is crucial. The Market Monitor™ filters out irrelevant data and surfaces only the most pertinent signals, ensuring your strategic decisions are based on insights that directly impact your business objectives.

Continuous Adaptation: The expert-in-the-loop AI technology behind the Market Monitor™ ensures that the insights you receive are always aligned with current market conditions. As your business environment evolves, the Market Monitor™ adapts to provide you with up-to-date, relevant information, helping you stay agile in a competitive landscape.

Collaborative Insight Sharing: Effective leadership involves ensuring your entire team is aligned with the latest intelligence. The Market Monitor™ facilitates seamless collaboration by allowing you to share tailored insights across your organization, enabling informed, unified decision-making.

Strategic Empowerment: In a complex and fast-paced industry, having the right information at the right time is crucial. The Market Monitor™ empowers you with the knowledge and tools needed to navigate market complexities confidently, helping you lead your organization to sustained success.

The post Link How-To: Curate Actionable Insights and Gain a Competitive Edge with the Market Monitor™ appeared first on Liminal.co.


Spherical Cow Consulting

Privacy-Enhancing Technologies: Protecting Human and Non-Human Identities

Privacy-Enhancing Technologies (PETs) are essential for safeguarding digital identities amidst increasing data breaches. They encompass tools like zero-knowledge proofs and advanced biometrics to secure both human and non-human identities in the digital space. As digital identity expands to include non-human entities, PETs are vital for ensuring privacy and security.

I want to talk about PETs. No, not about my cats (though they are awesome), but about Privacy-Enhancing Technologies.

Not a day goes by without learning about another data breach that is exposing critical details about people and things online. Enter Privacy-Enhancing Technologies (PETs)—a critical component in digital security. These tools, like zero-knowledge proofs and advanced biometrics, are designed to safeguard digital identities while allowing people and things to get work done.

The rise of privacy-enhancing technologies (PETs) like zero-knowledge proofs and advanced biometrics is reshaping how we think about and manage digital identity. But what’s driving this change, and why should it matter to you, whether you’re managing user access or overseeing countless processes and APIs in the cloud?

All Identities Need PETs

Digital identity isn’t just about people anymore. Sure, your personal online identity—how you log in, interact, and transact—remains essential. But increasingly, digital identity also includes non-human entities like software processes, APIs, and entire cloud workloads. These non-human identities need the same attention to security and privacy as human ones, especially as they become more central to how businesses operate.

When I first started thinking about digital identity, it was all about ensuring the right people had access to the right resources. Today, though, we’re dealing with identities that aren’t people at all—identities that exist in the cloud, managing everything from payroll to AI model training, often without any direct human oversight or even a human-like credential. And these identities need to be just as secure, if not more so, given the scale and complexity they operate within.

Human and Non-Human Considerations

Biometrics like facial recognition and fingerprint scanning have long been used to verify human identities. There’s a lot of work in the field of biometrics, especially with concerns about deepfakes making Ye Olde Fashioned liveness detection hardly a thing. But what about non-human identities? While biometrics might not apply directly, the principles of unique identification and secure access certainly do. For instance, in a cloud environment, processes and APIs need to be uniquely identified and authorized—much like a person—but with a focus on speed, scalability, and automation.

So, two challenges: ensuring that human identities are securely managed while also creating systems that can handle the massive scale of non-human identities. Whether it’s a government-issued digital credential or a cloud-based process, the goal is the same: secure, reliable, and privacy-respecting identity management.

Addressing Privacy Concerns with Digital Credentials

Governments are moving towards digital credentials to improve security and convenience. But this shift brings new privacy challenges. For humans, the way these credentials are issued and managed has significant implications for personal privacy. PETs like zero-knowledge proofs are becoming crucial to ensure that sensitive information remains private, even when it’s used to prove identity.

For non-human identities, the concerns are different but equally important. In cloud environments, digital credentials need to be robust enough to manage the complex interactions between countless processes and APIs, all while maintaining strict access controls and minimizing the risk of breaches.

Of course, if it was easy, I wouldn’t be writing about it. Standards organizations like the IETF are trying to define what a credential should look like in a scenario where it may or may not be for a person (that’s work in SPICE). They’re also trying to define the best way to move those credentials around from one cloud service to the next, given those cloud services don’t exactly speak the same languages (that’s work in WIMSE). And these days we can’t have those conversations without considering the privacy implications of all of it.

Zero-Knowledge Proofs: PETs for All Identities

Which takes us to an area I find fascinating: Zero-Knowledge Proofs (ZKPs). ZKPs are a game-changer for both human and non-human identities. They allow for the verification of information without revealing the underlying data, making them perfect for situations where privacy is paramount. To put it another way, a ZKP will tell you that the proof is true without actually exposing any of the data that is included in the proof.  “Is this mobile driver’s license valid” becomes a question that can be answered without exposing any of the data in the mDL. It’s magic, I tell you, pure magic. (And math. Lots and lots of math.)

In the human world, this might mean you will be able to prove your identity without exposing personal details. In the non-human world, ZKPs can help secure interactions between cloud processes, ensuring that only authorized entities can access sensitive data or perform critical operations. This approach not only protects individual privacy but also bolsters the security of complex digital ecosystems.

Why aren’t ZKPs widely deployed? Because the math involved is incredible, and not all devices can actually handle the necessary computations in the time people expect their web pages to load or their APIs to run. But that’s today; tomorrow is going to be an entirely different story as hardware improves.
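
For readers who want to see the "lots and lots of math" in miniature, here is a toy Schnorr-style proof of knowledge made non-interactive with the Fiat-Shamir heuristic. The parameters are deliberately tiny and insecure; real systems use large elliptic-curve groups, and this sketch only illustrates the shape of proving without revealing:

```python
# Toy non-interactive Schnorr proof: prove knowledge of x with y = g^x
# (mod p) without revealing x. Parameters are tiny and NOT secure.
import hashlib, secrets

p, q, g = 2039, 1019, 4  # safe prime p = 2q + 1; g generates the order-q subgroup

def H(*vals) -> int:
    data = "|".join(str(v) for v in vals).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Prover's secret x and public key y.
x = secrets.randbelow(q)
y = pow(g, x, p)

# Proof: commit, derive the challenge by hashing, respond.
r = secrets.randbelow(q)
t = pow(g, r, p)        # commitment
c = H(g, y, t)          # Fiat-Shamir challenge
s = (r + c * x) % q     # response

# Verifier: g^s == t * y^c (mod p) holds, yet x is never revealed.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified without revealing x")
```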

Visiting the PETs Shop

Technology is at the heart of these advances. From cryptography to AI, new tools are making it possible to protect digital identities against a range of threats. But with great power comes great responsibility. Whether it’s human users at risk from phishing attacks or non-human processes vulnerable to security breaches, there will never be a point where security and privacy are guaranteed. Innovation will always be necessary to get ahead of bad actors.

For human identities, this might mean adopting stronger authentication methods. For non-human identities, it could involve developing more sophisticated ways to manage and secure API interactions across multiple cloud environments. The challenge is ensuring that these technologies are both effective and adaptable, capable of protecting identities at scale.

PETs Need to be Everywhere

As digital identity continues to evolve, the line between human and non-human identities will blur further. In commerce, for example, digital identities—whether of customers or the processes serving them—are becoming central to every transaction. The transactions may trigger any number of APIs and services that go far beyond a single person’s digital identity. And since all problems have not been solved, businesses are going to have to support the innovation necessary to keep their data safe.

Wrap Up – Loving Your PETs

The future of digital identity is definitely not boring! PETs play a crucial role in shaping how we protect digital identities and are definitely worthy of some focused attention. It’s not the only piece of the puzzle in keeping our data safe, but it’s a biggy.

For tech leaders, I’m afraid you have another area of technology you need to keep on your radar. Your organization must engage in shaping privacy-enhancing digital identity solutions. Don’t just install them, think about how they meet tomorrow’s requirements. Better yet, be a part of defining tomorrow’s requirements in the standards being developed today.

For individual contributors like me, it’s crucial to stay informed. Keep up with the latest security practices, and be on the lookout for open calls for comments on the standards that impact this space. Your voice matters in shaping the standards and regulations in this space.

And if keeping track of all this sounds overwhelming, why not let someone else do the heavy lifting? Reach out to me; let’s chat about how I can help by providing regular updates and insights, tailored to your needs. You don’t have to do this alone.

The post Privacy-Enhancing Technologies: Protecting Human and Non-Human Identities appeared first on Spherical Cow Consulting.


IDnow

AML compliance in 2024: Assessing the effectiveness of AMLD6 and EU’s new AML package.

We explore the EU’s new AML package of rules and consider how it will affect the future of compliance in Europe. 

Ever since the first directive to combat money laundering and the financing of terrorism was issued in 1991, the European Union has continued to improve and harmonize the legislative arsenal of its member states. 

In the space of 30 years, six dedicated Anti-Money Laundering Directives (AMLD) have been issued. The first was mainly aimed at combating drug-related offences and introduced the first KYC provisions. The 4th and 5th Directives (AMLD4 & AMLD5) brought in increased transparency obligations, including access to beneficial ownership registers and strengthening controls on virtual currency transactions. With each new iteration, the scope of protection has expanded significantly and now covers many areas, ranging from art dealing to cryptocurrency trading.  

A major development to AML controls came in May 2024 with the release of the AML package, a set of legislative proposals aimed at strengthening the EU’s AML/CFT rules. The AML package aims to close regulatory gaps, strengthen cooperation between member states and ensure uniform application of the rules across the EU.

The AML package is well on its way to becoming a comprehensive model for the banking industry. It offers uniformity and efficient application of AML requirements, and the combined rule sets cover everything from top-level economic decision-making to the daily lives of individuals. However, such legislation and regulation often carries a somewhat negative reputation, as its final form can stifle innovation rather than protect the people it claims to serve. 

Analysts and pundits commend the EU for its outreach to seek input and collaboration for new legislation, but final forms of initiatives rarely resemble the spirit in which they began. This is exemplified in the Draghi Report of September 2024 that discusses European competitiveness.

As the AML package is being finalized, there is still the opportunity for strong private sector collaboration. If done right, this brings Europe close to ‘digital first’ solutions that are standardized, scalable and competitive on a global scale.

Rayissa Armata, Director of Global Regulatory and Government Affairs at IDnow.

“This would better ensure a more level playing field for both traditional services alongside rapidly growing industries such as crypto, blockchain, and digital identity verification processes based on more secure frameworks. If such points are harmonized and implemented properly, Europe has a strong chance to be a leader in the next phase of development in the digital economy,” adds Rayissa.

Here, we explore some of the new rules and consider the effect it may have on AMLD6 and the future of compliance in Europe. 

5 new changes to AML rules and regulations in 2024.

1. A new European Anti-Money Laundering Authority (AMLA) has been established and will be operational in Frankfurt from 2025. With a staff of 400, it will centralize anti-money laundering efforts, coordinate national authorities and conduct cross-border investigations.

2. A directive that further tightens the criminal provisions and procedures that member states need to adopt to improve the AML/CFT regime.

3. A regulation introducing harmonised rules, directly applicable across all EU member states, to combat money laundering and terrorist financing.

4. Crypto-asset service providers (CASPs) will now be required to collect and store information on the source and beneficiary of the funds for each transaction. This rule, known as the “travel rule”, already exists in traditional finance and requires that information on the source of the asset and its beneficiary travels with the transaction and is stored on both sides of the transfer. CASPs will be obliged to provide this information to competent authorities if an investigation is conducted into money laundering and terrorist financing. This means that businesses operating in these spaces must adopt harmonized verification standards, aligning with those used by traditional financial institutions.

5. A directive on access to centralized bank account registers: this makes information from centralized bank registers – data relating to the identity and location of bank account holders – available to member states through a single access point.

Regulations, directives and AMLD6 changes.

It’s important to note that there is an Anti-Money Laundering Regulation (AMLR) and Anti-Money Laundering Directives.

AMLR focuses more on regulatory and supervisory mechanisms, while directives such as AMLD6 enhance the criminal law framework for tackling money laundering. Together, these laws are designed to increase financial transparency, make it harder to use the financial system for illicit purposes, and ensure greater accountability for both individuals and legal entities involved in money laundering.

The AMLR provides a uniform set of standards directly applicable across the EU, ensuring consistency in financial and compliance procedures. AMLD6, however, allows member states some flexibility in how they apply criminal sanctions and enforcement measures, provided they align with the directive’s goals. Together, AMLR and AMLD6 form a cohesive framework within the AML Package.

AMLD6, which came into force in December 2020, has introduced several new legal provisions and expanded the list of criminal offences related to money laundering. Faced with the diversification of money laundering schemes, it now includes offences that go beyond simple financial crime. There are now 22 additional offences, including environmental crimes, tax crimes and cybercrime.  

AMLD6 also encourages member states to prosecute “facilitators” who help to carry out illegal activities. How member states should prosecute is also being revised and AMLD6 seeks to improve the deterrent effect of existing legislation by imposing tougher penalties. EU member states are now required to impose prison sentences of at least four years for serious money laundering offences, with heavier penalties for repeat offenders. Significant financial penalties are also issued (up to €5 million for individuals), to deprive the culprits of any profit derived from illicit activities. 

Another major development is the expansion of who should be held responsible for money laundering. From now on, legal entities could be liable for money laundering offences committed by their employees. Companies may also be subject to severe penalties, which could result in the company’s closure. Executives may also be held liable for money laundering offences committed within their organization as part of the EU’s plan to adopt “effective, proportionate and dissuasive criminal sanctions“.  

Recognizing the transnational challenges posed by organized crime and money laundering, AMLD6 promotes a rapid and effective exchange of information on suspicious transactions and ongoing investigations, as well as enhanced legal assistance in the collection of evidence and freezing of assets. It also promotes cooperation with specialized European agencies, such as Europol and Eurojust to facilitate the coordination of cross-border investigations. 

Finally, the legislation contains enhanced due diligence provisions for wealthy individuals with assets of more than €50 million, excluding their main residence, as well as an EU-wide limit of €10,000 for cash payments. 

The future of AML compliance. 

The implementation of AMLD6 has significant implications for businesses and financial institutions. Companies will now be required to protect themselves against compliance risks and adopt appropriate control mechanisms and systems, conduct regular audits, and raise awareness among their employees. This includes investing in advanced transaction monitoring and analysis technologies to proactively detect suspicious financial activity. These actions are necessary to protect the integrity of the company, avoid severe penalties, and maintain stakeholder trust. 

In addition, many industries that were not previously required to comply with certain AML regulations will now need to be more transparent with their transactions. For example, from 2029, top-tier professional football clubs involved in large-scale financial transactions, whether with sponsors, advertisers or in the context of player transfers, will have to comply with certain KYC rules. Like the financial sector, football clubs will have to verify the identity of their customers, monitor transactions and report any suspicious transactions to the FIUs. 

As money laundering and terrorist financing are global problems, measures adopted at EU level must be coordinated with international measures; otherwise they will have a very limited effect. The European Union must therefore continue to consider the recommendations of the Financial Action Task Force (FATF) and other international bodies active in AML/CFT. 

The new package of AML rules has now been published in the EU’s Official Journal, which means that companies will have up to two years to implement some measures and three years for others.  


By

Mallaury Marie
Content Manager at IDnow
Connect with Mallaury on LinkedIn


liminal (was OWI)

The Increasing Role of Behavioral Biometrics for ATO Prevention in Banking

The post The Increasing Role of Behavioral Biometrics for ATO Prevention in Banking appeared first on Liminal.co.

DHIWay

Product tracking, tracing and authenticity using CORD

The post Product tracking, tracing and authenticity using CORD appeared first on Dhiway.

Issue verifiable credentials using MARK Studio

The post Issue verifiable credentials using MARK Studio appeared first on Dhiway.

BlueSky

Create a Starter Pack!

Create a starter pack today — custom invites that bring friends straight into your space on Bluesky.

To learn how to create a starter pack in English, read our guide here.

Today, we’re launching starter packs — custom invites that let you bring friends straight into your space on Bluesky!

An example of a starter pack.

Recommend custom feeds and users to help your community find each other. Get started from the Starter Packs tab on your Bluesky profile.

What’s in a starter pack?

Custom feeds. On Bluesky, you can set any algorithm or topic as your main timeline. Examples include Quiet Posters (posts from your quieter mutuals) and Catch Up (the most popular posts from the last 24 hours).

Follow recommendations. Add your favorite accounts and encourage new users to follow them.

How do I create a starter pack?

1. Click the Starter Packs tab. On your profile, next to the media and likes tabs, you’ll see a new tab.

2. Create a starter pack. Use our auto-generation tool or build your own from scratch! You can create more than one starter pack. Click “Make one for me” for a starter pack pre-filled with suggested users and custom feeds; you can add or remove items from this list. Or click “Create” to add users and feeds to your starter pack yourself. Set your starter pack’s name, description, and recommended users and feeds.

3. Share your starter pack! Each starter pack comes with a link and a QR code you can share. Message your starter pack to a friend, share it with your professional network, and post it on other social apps!

4. Say hello! You’ll be notified about users who join Bluesky through your starter pack.

Who can use starter packs?

Anyone with a Bluesky account can create starter packs.

If you don’t have a Bluesky account yet, you can join through a friend’s starter pack and start out with the customizations they recommend. Once you’re on Bluesky, you can add or remove those recommendations and customize your experience further.

If you’re already on Bluesky but want to plug into another community or pick up a friend’s recommendations, you can also use their starter pack to add to your experience!

Starter Packs FAQ

How many people and feeds can I add to my starter pack?

You can recommend up to 150 people and up to 3 custom feeds. New users will automatically have the Following and Discover feeds pinned.

How can I share my starter pack with more people?

Message a link to your friends, post about it on other social networks, share it with your professional network! Each starter pack comes with an auto-generated preview image showing its name and a few suggested users, to make sharing easy.

How do I find more starter packs on Bluesky?

You can share starter packs directly on Bluesky, and you’ll see an embedded preview for those links. Starter packs don’t currently appear in search, so to find one, a friend will have to send you the link, or you can view the embedded preview inside the Bluesky app.

I was added as a recommended user in someone’s starter pack. Can I remove myself?

If you block the creator of a starter pack, you’ll be filtered out of and removed from their starter pack. You can also report a starter pack to the Bluesky moderation team (see below).

Can I report a starter pack to the Bluesky moderation team?

Yes. You can report a starter pack by clicking the three-dot menu at the top of the starter pack. The Bluesky moderation team reviews all reports and evaluates them against our Community Guidelines.

Can I include a labeling service in my starter pack?

Not currently — we’re first working on improving the in-app discovery of labeling services and the reliability of those services.

Wednesday, 28. August 2024

Matterium

BEYOND THE OUROBOROS — Finite and Infinite Crypto

Posting on X, Ethereum founder Vitalik Buterin recently expressed his concerns about the chain’s current use case, saying, “This worries me. Because it feels like an ouroboros: the value of crypto tokens is that you can use them to earn yield which is paid for by… people trading crypto tokens”. Famously, the ouroboros is the image of a snake eating its own tail, found in cultures across the world

Posting on X, Ethereum founder Vitalik Buterin recently expressed his concerns about the chain’s current use case, saying, “This worries me. Because it feels like an ouroboros: the value of crypto tokens is that you can use them to earn yield which is paid for by… people trading crypto tokens”. Famously, the ouroboros is the image of a snake eating its own tail, found in cultures across the world from ancient times, and Vitalik has hit the nail on the head here: yes, crypto does just eat itself.

Finite Crypto is — Token trading. A one-dimensional, zero-sum game where anyone making money does so through someone else losing money, not through creating real value. It is just shifting money about. This has only a limited lifetime before capital moves on.

Infinite Crypto is — Opening up crypto to real-world uses. Multi-dimensional, innovative, flexible, forward-looking. A non-zero-sum game, where money is made by creating real-world utility that generates true value. This has unlimited potential.

Currently, what “crypto” means to most people is a finite, one-dimensional, zero-sum game that is just about token trading; any “yield” a token seller gets comes at the expense of another token buyer losing money. The money just goes round in circles: crypto is not generating any new value, it is moving value from one person to another, and it relies on new money coming into the market to keep making it possible for existing token holders to cash out. As with a casino, the only winner in the end is the house; whatever someone does, those gas fees still have to be paid. It is all very finite and constrained. Much of crypto’s appeal rests on the dollar’s weakness: because crypto does not suffer from inflation in the way the dollar does, buyers try to use it as a hedge against inflation.

Ethereum has the potential to create so much more — Infinite Crypto, but isn’t really being used for anything innovative now, it’s not generating value in any real sense. Token trading is simply a way to move dollars about — token buyers spend their dollars on token, token goes up, maybe token goes down, and someone, somewhere, gains some value, then cashes their tokens out into dollars to spend it in the real world (paying those gas fees on the way). Even when token trading is done in a hundred percent legal way, it is still just moving money from losers to winners, it all just goes round in a circle and doesn’t grow — finite. At the moment, growth in crypto is mostly an illusion, it gets bigger because more retail investors put their savings in, not because crypto does something useful that increases value.

All this was neatly encapsulated, weirdly enough, by a scholar of religion named James P. Carse. He said “There are at least two kinds of games: finite and infinite” and defined them in this way: “A finite game is played for the purpose of winning, an infinite game for the purpose of continuing the play”. Currently crypto is a finite game, but crypto needs to become an infinite game, with evolving rules and boundaries, where the purpose is to keep things going and continue to create new value in as many ways as is possible. We are done with the old crypto — Infinite Crypto awaits, free of the shackles and constraints of the finite token game and open to the multiplicity of reality.

Vitalik understands this better than most and realises its implications saying, “while defi might be great it’s fundamentally capped and can’t be the thing that brings crypto to another 10–100X adoption burst.” Crypto has been around long enough that most people who feel at home with the token market as it is have already bought into it; there may be an incremental growth in numbers perhaps, but not the 10–100x step change that Vitalik sees the potential for. He can, though, see where that’s coming from — “I would love to see a story for where the yield is coming from…that’s rooted in something external”. The next step for Ethereum lies in connecting to infinite possibilities of the real world, in other words.

Crypto as it stands is playing the finite game; Infinite Crypto is where we need to take things next, breaking out of the current doom loop of finite crypto. Infinite Crypto is where the growth is; that’s what will make sustained money for everyone. If we fail to break out of the doom loop, the capital will eventually go elsewhere and the blockchain will end up like Second Life (do any of you even remember Second Life? Second Life was the future once, long, long ago), a niche digital world with almost no impact on real life. Finite games always end: they become stagnant, innovation stops, they die.

But this is not what Vitalik and the team created Ethereum for; it was created for Infinite Crypto. It started with a vision of transforming the entire world, but it has become limited and massively inward facing, all about those finite zero-sum games. You can play the casino game just as happily with Bitcoin as you can with Ethereum, if you really want to, but Vitalik and his team built Ethereum for smart contracts, and the real world is built on contracts. Find a way to enable Ethereum to streamline real-world contracts through smart contracts and it starts to generate actual yield: yield for potentially everyone involved, not yield produced by taking money from losers to give to winners (and on a pretty random basis at that). There’s an infinity of opportunity for the taking.

A conservative estimate suggests that there’s half a TRILLION dollars to be gained by enabling efficiency savings in international trade and business, the kind of efficiency savings that Ethereum is eminently well equipped to provide — the International Chamber of Commerce reckons there’s $280 billion in things like import and export deals, currently encumbered with telephone-book-thick paper documentation (yup, they still print it all out and cart it around); then there’s $100 billion from the deregulation of US real estate commissions, which opens them up to innovative ways of dealing with property contracts and all that associated paperwork, not to mention real estate in the rest of the world. On top of that, there’s likely to be well over a hundred billion in other savings here and there, so half a trillion is probably on the conservative side. Then there are value-added services in the real world that could use Ethereum — it can deliver proven, valid data for AI-based searches on real estate that prevents the AI from hallucinating, for example. If there is any doubt about its veracity, the data can be checked back to the blockchain and verified.

This is all business that could be transacted over Ethereum, business with real, actual, yield, the kind of yield Vitalik means here, and it is pretty much infinite. Vitalik is reasserting the original vision, he is reminding us of the way, that this was the future once.

Ethereum has had its playpen stage, where idealistic utopians dreamed of a financial system untethered from the state and from tax, and has seen that largely swept away by ruthless speculators and, yes, outright scammers, who have turned the whole space into a dog-eat-dog wilderness (with RFK Jr we’ve seen what happens to your reputation when it’s alleged you eat dogs…). Now though, with Vitalik’s lead here, it’s time to grow up and grow out, to connect Ethereum to stuff that generates yield all round and use the business world to drive that 10–100x adoption that Ethereum is ripe for: Infinite Crypto. Right now, crypto risks just stalling out and senescing; it is basically on life support from people sacrificing their futures to buy a bunch of worthless shit coins, and lending on crypto assets is just a way of building up leveraged positions and instruments that have no economic fundamentals — all that technology could be doing mortgages instead: business, with actual yield.

We have every opportunity to make Ethereum economically productive in the real world without breaking the law. The future for the blockchain has never been brighter, but that future is only accessible after the scamming stops, we break out of the loop and attain the infinite.

We need to get back to the original Ethereum vision

This is good news for me. After being the Ethereum launch coordinator in 2015, I set up Mattereum in 2017 to achieve that future. Since then, we’ve been working on laying the foundations, putting the tools in place to enable Ethereum to interact effectively with the real world. We’ve sorted out the lawtech so we can make smart contracts enact real world contracts that are legally binding, and backed them with warranties that work under the 1958 New York Convention on Arbitration, so they stand up in court in any of 170 countries. We have the tools that connect Ethereum to the physical world, the tools that can be used to bring those efficiencies to world trade, that enable novel, creative business solutions to use Ethereum.

Vitalik has given us the direction, we have built the tools — together we can uncoil the snake, Infinite Crypto is within reach.

BEYOND THE OUROBOROS — Finite and Infinite Crypto was originally published in Mattereum - Humanizing the Singularity on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

What you need to know about Mobile Driver’s Licenses

The post What you need to know about Mobile Driver’s Licenses appeared first on Indicio.
A Mobile Driver’s License (mDL) is a digital version of a physical driver’s license, defined by an international specification. Given that driver’s licenses are widely used for identification, it’s likely that a digital version would enjoy similar ubiquity online. Here, we look at what exactly they are (are they verifiable credentials?), their benefits, and why they are not yet widely available.

By Tim Spring

It all starts with the International Organization for Standardization (ISO) 18013 series. In a nutshell, this series creates a common standard for the international recognition of a digital driver’s license. 

The standard lays out the scope as follows:

- You must use a machine to obtain the mDL.
- The mDL must be tied to the mDL holder.
- You must be able to authenticate the origin of the mDL data.
- You must be able to verify the integrity of the mDL data.

Critically, there are two things the standard does not cover:

How the holder’s consent to share their data is obtained.

Any requirements on how the mDL data is stored.

So now we know what the mDL is: it is a driver’s license that can be stored on your mobile device and is tied to you. It can be proven to be as accurate as a physical card because we can prove that it was issued by a proper authority — such as the department of motor vehicles — and prove that the integrity of the data has not been compromised.

But an mDL is not the same as a verifiable credential because the mDL data can technically be stored in a siloed database. However, a verifiable credential, which allows a person to hold their data, could absolutely fit this standard and be used to easily issue mDLs, as they meet all the other requirements laid out above. 

The benefits 

The benefits of using mDLs are similar to the benefits of using verifiable credentials. They are simple to verify and use, convenient, and often more secure than a physical document.

There are entire guides on how to spot a fake ID, because each state has its own methods for making its driver’s licenses difficult to counterfeit. An mDL offers a much simpler way to verify the identity of a person, or their age for eligibility to purchase goods: all you need to do is scan the QR code and the software will tell you. You don’t need a flashlight, or to look for holograms. 

Most people also now have a mobile device that is always with them. Carrying a digital version of your driver’s license means you don’t have to worry about accidentally leaving your ID somewhere or fishing through a bag to find it; it is always at your fingertips.

Lastly, the security features of these mDLs, especially if they are created through verifiable credentials, are hard to match. If the mDL is a verifiable credential, it is essentially immune to forgery because the software can cryptographically verify the origin of the data, and there is an additional layer of security from the data being stored on the holder’s mobile device instead of a centralized database, removing the risk from data breaches. 
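As a rough illustration of what “cryptographically verify the origin of the data” means, here is a minimal sketch of an issuer-signature check, assuming a simplified credential shape and an Ed25519 issuer key obtained from a trusted registry. The real ISO 18013-5 format uses CBOR/COSE structures, so treat this as the idea rather than the wire format.

```typescript
import { createPublicKey, verify } from "node:crypto";

// Simplified credential shape (an assumption for illustration; the
// actual ISO 18013-5 encoding uses CBOR/COSE, not JSON).
interface SignedCredential {
  payload: string;            // canonicalized license data
  signature: Buffer;          // issuing authority's signature
  issuerPublicKeyPem: string; // in practice, resolved from a trusted registry
}

// Returns true only if the payload was signed by the issuer's key,
// which proves both the origin and the integrity of the mDL data.
function verifyOrigin(credential: SignedCredential): boolean {
  const issuerKey = createPublicKey(credential.issuerPublicKeyPem);
  // For Ed25519 keys, Node infers the algorithm when null is passed.
  return verify(
    null,
    Buffer.from(credential.payload),
    issuerKey,
    credential.signature
  );
}
```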

Why are these mDLs not commonplace?

One of the reasons why these credentials have not yet been widely adopted is that regulations have not kept up with the technology.

In the US, the REAL ID Act of 2005 wasn’t updated until the end of 2020 to include permission for digital and mobile driver’s licenses. But the federal government leaves the issuing of driver’s licenses to each state, meaning that state governments also have to vote on implementation; as of August 2024, only 13 have passed legislation to start issuing mDLs. 

Where they are issued by a state, mDLs are not currently a replacement for your license but an additional way to represent it, meaning that you will likely still have a physical license somewhere. This could be another reason many haven’t adopted them: they see it as an unnecessary add-on.

It’s important to remember that this technology is still new. Many people might not understand or trust it yet, but as the world shifts to be more digital, it will be a big part of how we prove our identity moving forward. 

If you are part of an organization looking into mDL technology, or a better way to prove your identity online, Indicio can help! Get in touch with our team of experts today.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post What you need to know about Mobile Driver’s Licenses appeared first on Indicio.


Ontology

Why Elon Musk’s Support for California’s AI Bill Highlights the Need for Decentralization

As AI becomes more embedded in every aspect of our lives, the debate around California’s AI Safety Bill (SB 1047) highlights a critical issue: the risks of centralized AI control. While the bill attempts to mitigate these dangers, the real solution lies in decentralization — distributing control and ensuring that AI systems align with human values, privacy, and security. The Risks of Centralized

As AI becomes more embedded in every aspect of our lives, the debate around California’s AI Safety Bill (SB 1047) highlights a critical issue: the risks of centralized AI control. While the bill attempts to mitigate these dangers, the real solution lies in decentralization — distributing control and ensuring that AI systems align with human values, privacy, and security.

The Risks of Centralized AI

Centralized AI systems, controlled by a few powerful entities, pose significant dangers. We’ve already seen how centralized control can lead to data misuse, biased algorithms, and even AI-driven censorship. When a handful of corporations dictate the direction of AI development, the risks of abuse and manipulation skyrocket. For example, if a single entity controls the data and algorithms behind AI-driven surveillance, the potential for privacy violations and authoritarian control becomes disturbingly real.

Decentralization isn’t a buzzword; it’s the backbone of a system we can trust. Unlike centralized models that concentrate power, decentralization spreads control across a network, making it nearly impossible for any one actor to manipulate or exploit the system. Decentralized identity (DID) systems, for instance, enable individuals to maintain ownership of their digital identities. This ensures that interactions with AI are grounded in verified, user-controlled data — without the risk of breaches or exploitation by a centralized authority.

The Role of Decentralized Identity and Privacy

DIDs, like those powered by Ontology’s ONT ID, are a cornerstone of decentralized AI. In a world where AI might drive everything from financial transactions to governance, ensuring that human values and rights are upheld is critical. Decentralized systems provide a framework where proofs of identity, timestamped transactions, and zero-knowledge proofs can be securely integrated, preventing AI from being hijacked by non-human interests.
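For readers new to DIDs, the sketch below shows a minimal W3C-style DID document, the kind of record a DID resolves to. The identifier and key values are made-up placeholders, not real ONT ID output.

```typescript
// A minimal W3C-style DID document with placeholder values.
// Resolving a DID yields a document like this; it lists the public
// keys a verifier can use to check signatures from the DID's owner.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:ont:ExampleIdentifier123",
  verificationMethod: [
    {
      id: "did:ont:ExampleIdentifier123#key-1",
      type: "Ed25519VerificationKey2020",
      controller: "did:ont:ExampleIdentifier123",
      publicKeyMultibase: "z6MkExamplePublicKey",
    },
  ],
  // Keys authorized to authenticate as this identity.
  authentication: ["did:ont:ExampleIdentifier123#key-1"],
};
```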

Moreover, privacy must be a cornerstone of AI development. Today’s centralized AI models often rely on vast amounts of personal data, raising serious concerns about surveillance and misuse. Decentralized approaches, powered by technologies like zero-knowledge proofs, allow for the validation of data without compromising privacy. This ensures that AI systems remain transparent and accountable, free from the risks of censorship or manipulation.

Global Context and the Future of AI Regulation

California’s AI Safety Bill is part of a growing global trend toward regulating AI. The European Union’s AI Act, for instance, introduces strict guidelines on the use of AI in high-risk areas, but it doesn’t take effect until 2025. Meanwhile, China’s approach to AI regulation is more focused on controlling and harnessing AI for state objectives, often at the expense of individual freedoms. In this landscape, decentralization offers a way to protect innovation while ensuring that AI development remains aligned with democratic values.

By contrast, decentralized AI frameworks ensure that no single entity holds too much power over these systems. They offer a pathway to develop AI technologies that are resilient, transparent, and aligned with public interests. This approach could prevent the kind of monopolistic practices that have plagued the tech industry for years, while fostering innovation in a way that centralized models cannot.

Conclusion: A Call for Decentralized Solutions

The California bill may mean well, but by doubling down on centralization, it misses the mark. We don’t need more gatekeepers; we need systems that empower individuals, protect privacy, and resist censorship. Decentralization isn’t just a technical fix; it’s a moral imperative for the AI-driven world we’re hurtling toward. As discussions around AI regulation continue, it’s clear that decentralization isn’t just a technical choice — it’s a fundamental necessity. By embracing decentralized technologies, we can build AI systems that are not only safe and trustworthy but also aligned with the principles of self-sovereignty and privacy. At Ontology, we’re committed to leading this charge, creating the frameworks that will ensure AI serves humanity — not the other way around.

Read more Ontology snippets here: https://ont.io/news/1086/The-Telegram-CEOs-Arrest-Highlights-the-Urgent-Need-for-Decentralization-and-Privacy-Protections

Why Elon Musk’s Support for California’s AI Bill Highlights the Need for Decentralization was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Indicio

DNP Launches Platform for Building Decentralized ID-based Digital Credential Issue and Verification System

The post DNP Launches Platform for Building Decentralized ID-based Digital Credential Issue and Verification System appeared first on Indicio.

Trinsic Podcast: Future of ID

Karyl Fowler - From Transmute to Global Trade and the Power of Digital Identity

In this episode, I sit down with Karyl Fowler, co-founder and CEO of Transmute, a company at the forefront of integrating modern identity technology into global trade. Before founding Transmute, Karyl's work in the semiconductor and bioelectronics industries provided her with unique insights into the complexities of global supply chains. We explore a variety of topics, including: The challenges

In this episode, I sit down with Karyl Fowler, co-founder and CEO of Transmute, a company at the forefront of integrating modern identity technology into global trade. Before founding Transmute, Karyl's work in the semiconductor and bioelectronics industries provided her with unique insights into the complexities of global supply chains.

We explore a variety of topics, including:

- The challenges of digitizing trade documentation and how Transmute is solving the multi-billion dollar paper problem
- The evolution of decentralized identity and its application to physical goods and cross-border commerce
- Key lessons learned from working with regulators and how Transmute has navigated the highly regulated trade industry

Karyl offers valuable perspectives on the future of trade and digital identity, making this an episode you won't want to miss!

You can learn more about Transmute on their website: transmute.industries.

Subscribe to our weekly newsletter for more announcements related to the future of identity at trinsic.id/podcast

Reach out to Riley (@rileyphughes) and Trinsic (@trinsic_id) on Twitter. We’d love to hear from you.


DHIWay

Dhiway makes the Finternet possible

The BIS Working Papers No. 1178 (PDF), authored by Agustín Carstens and Nandan Nilekani, introduces Finternet: the financial system for the future, which holds immense potential for the financial sector and promises a brighter future. The paper outlines a way to unlock the potential within the financial sector by enabling an architecture that draws on […] The post Dhiway makes the Finternet poss

The BIS Working Papers No. 1178 (PDF), authored by Agustín Carstens and Nandan Nilekani, introduces Finternet: the financial system for the future, which holds immense potential for the financial sector and promises a brighter future.

The paper outlines a way to unlock the potential within the financial sector by enabling an architecture that draws on the Internet, decentralization, and unbundling. Dhiway is one of the small core group of companies working on developing the concepts in the paper into a functioning system. With us on this journey are Silence Laboratories, JUSPAY, Rooba Finance, and the Solana Foundation.

At the outset, there is an exposition of the vision for the Finternet: multiple financial ecosystems interconnected with each other, much like the internet, designed to empower individuals and businesses by placing them at the centre of their financial lives. It advocates for a user-centric approach that lowers barriers between financial services and systems, thus promoting access for all.

The tokenization of real-world assets is an integral component of the Finternet. With tokenization comes the need for a well-designed governance system built on regulatory frameworks with which the technology choices are compliant. If you are not yet familiar with asset tokenization, here is a primer written by Suraj Atreya, which is essential reading.

Blockchain technology is a key piece of the technology infrastructure, and it is where CORD, our Open Trust Infrastructure, fits in to enable the design of innovative applications and solutions.

The emergence of the Finternet is not just a blueprint for information technology architecture. It is conceptualised to unbundle traditional, centralized financial systems using the values of innovation, transparency, enhanced security, cost efficiency and interoperability, all while being very user-centric.

Find more about the work underway at Finternetlab.io

The post Dhiway makes the Finternet possible appeared first on Dhiway.


Samagra and Dhiway come together to build a developer community for CORD.

Samagra Development Associates Private Ltd (“Samagra”), engaged in implementing Code for GovTech (C4GT) to build and sustain developer communities, has joined hands with Dhiway, a leading provider of enterprise Web 3.0 open trust infrastructure, to create communities of innovation around the open-source Layer 1 blockchain framework CORD. Dhiway and Samagra will offer structured mentorship and […]

Samagra Development Associates Private Ltd (“Samagra”), engaged in implementing Code for GovTech (C4GT) to build and sustain developer communities, has joined hands with Dhiway, a leading provider of enterprise Web 3.0 open trust infrastructure, to create communities of innovation around the open-source Layer 1 blockchain framework CORD.

Dhiway and Samagra will offer structured mentorship and outreach engagement programmes for community members to build innovative solutions to solve complex nation-scale challenges using the CORD blockchain.

This partnership will also foster engagement with industry stakeholders, government agencies and regulatory bodies to help build awareness and engagement around Open Trust Infrastructure.

Nitin Kashyap, Senior Vice President and Head of Product at Samagra stated, “India is making remarkable strides in building DPGs and DPI. As we set new benchmarks, it becomes crucial to ensure the adoption, maintenance, and sustainability of DPGs and open-source technology for the public good. Achieving population-scale impact requires a comprehensive, whole-of-system approach. Through initiatives like C4GT, we aim to unite organizations and contributors to drive this mission as a community. Our collaboration with Dhiway marks a significant step forward in strengthening this community.”

K P Pradeep, CSO at Dhiway, emphasized, “Today it is critical that developers acquire the habit, discipline and knowledge for building at scale using the CORD Blockchain framework. The multiplier effect of open standards, open source software, open protocols, and open trust infrastructure will unlock the potential to solve challenges for India and the world. Samagra’s focus on enabling DPGs that fit within a DPI complements our vision of reshaping the digital future.”

About Samagra

Samagra is a mission-driven governance consulting firm that works exclusively with governments to transform governance. This involves working with the senior political and bureaucratic leadership of states and the Centre on deep systemic reforms, leveraging tech and data to strengthen the state’s capacity to deliver sustainable outcomes at scale across domains like education, agriculture, skilling, employment, health and public service delivery, among others.

About Dhiway

Dhiway is a trust infrastructure company reshaping the digital future through population-scale technology solutions. We enable enterprises and government agencies to address key challenges around data stores, data exchange and data assurance through the CORD Blockchain – a Layer 1 enterprise blockchain technology.

The post Samagra and Dhiway come together to build a developer community for CORD. appeared first on Dhiway.


Integra and Dhiway Partner Up to Expand Verifiable Credentialing

Integra Micro Systems Pvt Ltd (“Integra”), a leading provider of advanced technology products and solutions across sectors such as BFSI, Telecom, Government, Retail/eCommerce, Enterprise, and Airlines, has announced a strategic partnership with Dhiway, a pioneer in enterprise Web 3.0 open trust infrastructure. This collaboration aims to revolutionize the business of verifiable credentialing and dr

Integra Micro Systems Pvt Ltd (“Integra”), a leading provider of advanced technology products and solutions across sectors such as BFSI, Telecom, Government, Retail/eCommerce, Enterprise, and Airlines, has announced a strategic partnership with Dhiway, a pioneer in enterprise Web 3.0 open trust infrastructure. This collaboration aims to revolutionize the business of verifiable credentialing and drive forward application modernization efforts.

Integra’s expertise in Product and Tech Stack Development, Identity Authentication, IT Infrastructure Modernization, Application Modernization, Enterprise Automation, IT/Network Automation, Zero-Trust Architecture, Bot-AI-ML, DevSecOps, and Systems Integration will be instrumental in this joint initiative. By integrating Dhiway’s state-of-the-art Web 3.0 infrastructure, the partnership will enhance the deployment and scalability of digital credentials, streamline automation processes, and modernize infrastructure to effectively manage and verify digital trust and security. This synergy seeks to expand the acceptance network for verifiable credentials, ensuring that modern applications and systems are equipped to handle and secure digital records efficiently.

Mahesh Jain, Managing Director at Integra, stated: “Our partnership with Dhiway marks a significant step forward in our mission to modernize and secure digital ecosystems. By leveraging Dhiway’s cutting-edge Web 3.0 infrastructure, we are poised to transform the landscape of verifiable credentialing. Additionally, we intend to extend our Wallet software to support CBDC, NFTs, and Crypto, utilizing Dhiway’s robust blockchain technology. This collaboration not only enhances our capabilities in application modernization and digital trust but also aligns with our commitment to driving innovation and efficiency across industries. Together, we are setting new standards for digital identity management and trust infrastructure, paving the way for a more secure and reliable digital future.”

Satish Mohan, CEO at Dhiway, emphasized: “We are excited to welcome Integra into the Dhiway ecosystem. Our Open Trust Infrastructure, built on the foundation of Web 3.0 and state-of-the-art cryptography, has revolutionised how organisations secure and exchange data with continuous assurance. This partnership with Integra reinforces our commitment to advancing digital trust, especially within the financial sector. Together, we are poised to redefine the standards for secure and transparent digital ecosystems, delivering unparalleled value to our customers.” 

About Integra Micro Systems Pvt Ltd

Founded in 1982, Integra Micro Systems Pvt Ltd is a leader in innovative solutions for the Government, BFSI, and Telecom sectors. The company has a rich history of pioneering advancements, including being the first to port UNIX on Indian hardware, transitioning to Linux in the mid-90s, and developing the WAP stack for handheld devices. In 2007, Integra introduced the MicroATM device, revolutionizing financial inclusion in India and laying the groundwork for Aadhaar-based payment systems. Today, Integra excels in Digital Transformation, offering solutions in Enterprise Automation, Infra Modernization, Software Development, Systems Integration, AI/ML-based analytics, and advanced digital identity management, driving efficiency and progress across various industries.

About Dhiway

Dhiway is a trust infrastructure company reshaping the digital future through population-scale technology solutions. We enable enterprises and government agencies to address key challenges around data stores, data exchange, and data assurance through the CORD Blockchain – a Layer 1 enterprise blockchain technology.



The post Integra and Dhiway Partner Up to Expand Verifiable Credentialing appeared first on Dhiway.


Caribou Digital

Conjuring innovation: Tech pilots as products

A recent Forbes article claimed ‘Blockchain makes cash-based humanitarian aid secure, fast and transparent’. But how do aid professionals actually experience it? Are these claims truly being fulfilled? What impact does blockchain innovation have for organisations in practice? My latest research article (Conjuring a Blockchain Pilot: Ignorance and Innovation in Humanitarian Aid) lifts the bonne

A recent Forbes article claimed ‘Blockchain makes cash-based humanitarian aid secure, fast and transparent’.

But how do aid professionals actually experience it?
Are these claims truly being fulfilled?
What impact does blockchain innovation have for organisations in practice?

My latest research article (Conjuring a Blockchain Pilot: Ignorance and Innovation in Humanitarian Aid) lifts the bonnet on humanitarian innovation. Based on ethnographic research in Jordan, I explore what is at stake when an aid organisation experimentally applies a blockchain pilot project in refugee camps.

This innovation, I suggest, comes with a mix of genuine promise, authentic expertise, but also blind faith and strategic ignorance.

Tech pilots aren’t just designed to help people: regardless of what they achieve, they are valuable products for aid industry actors to promote.

The Blockchain Pilot

The Blockchain Pilot was introduced to replace the traditional cash-in-hand system with a blockchain-based digital wallet, integrated with biometric iris recognition. This system aimed to improve the security, speed, and transparency of aid payments while significantly reducing costs by bypassing conventional financial intermediaries. It also promised to empower Syrian refugee women by providing them with independently held digital wallets. However, a key appeal of the pilot was its potential to attract funding and boost the organisation’s reputation among donors.

How conjuring works: Ignorance in innovation

In the paper I argue that The Blockchain Pilot was ‘conjured’ as a product to be promoted to a competitive marketplace of aid donors. In social studies of capitalist markets, ‘conjurings’ are the spectacles and magical appearances that draw an audience of investors. I suggest that conjurings are not just about appearance and show. They involve key forms of ignorance: (i) confusion, (ii) illusion, (iii) disappearance, and (iv) misdirection.

i. Confusion
Aid professionals involved in the pilot expressed confusion about blockchain. Despite being expected to represent and defend the pilot, most staff had little understanding of how blockchain operated. This confusion was not unique to this organisation. The universal mystification surrounding blockchain made promotional claims about it difficult to evaluate or refute.

ii. Illusion
Blockchain was often treated as a magic technological object capable of achieving a range of desirable effects without clear explanation. Aid professionals conflated blockchain with other features of automation or digitalisation which did not actually require blockchain. ‘Digital wallet’ was a misnomer: refugees could not access the balance and transaction records on a personal device; they could not credit money, only withdraw it; and they did not have custody of the wallet — the aid organisation did.

iii. Disappearance
The hierarchical design of the system meant that aid workers did not have access to the blockchain ledger. This design reinforced existing power asymmetries within the organisation and disconnected them from valuable information. Aid workers disappeared from the aid delivery process, replaced by the private companies and biometric cameras.

iv. Misdirection
Promoting The Blockchain Pilot often involved diverting attention away from its negative impacts on people. Aid organisations focused on quantitative metrics like cost-effectiveness and transaction speed, while downplaying the social and practical challenges faced by the refugees and aid workers.

Ignorance is not an insulting term denoting simply the absence of knowledge. It is actively produced, it can be both strategic and inadvertent, and it is shaped by hierarchical power relations and neoliberal business models in aid. The politics of ignorance is therefore something we need to take seriously when we analyse organisations and technological change.

This study is not just a cautionary tale for practitioners in aid. Beyond refugee camps and beyond blockchain, the conjuring of innovation products can take precedence over delivering meaningful value to the people they enrol.

Conjuring innovation: Tech pilots as products was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


Dock

A Deeper Look at Credential Monetization and Ecosystem Payments

In our 2023 Masterclass on Reusable Digital Identity, we explained how verifiable credentials simplify organizations’ processes and improve customers’ experience by making it easy to reuse trusted identity data across business partners. This led us to focus our 2024 Roadmap on creating tools to simplify the management of

In our 2023 Masterclass on Reusable Digital Identity, we explained how verifiable credentials simplify organizations’ processes and improve customers’ experience by making it easy to reuse trusted identity data across business partners. This led us to focus our 2024 Roadmap on creating tools to simplify the management of digital identity ecosystems. With the help of our early adopters who provided valuable feedback, Dock Certs now contains simple-to-use tools for managing the trust relationships in a custom ecosystem.

Full article: https://dock.io/post/a-deeper-look-at-credential-monetization-and-ecosystem-payments


BlueSky

New Anti-Toxicity Features on Bluesky

Trust and Safety (T&S) affects everything — from community policy and spam detection, all the way to the order that replies show up on a post. At Bluesky, the product team works hand-in-hand with T&S to design features that balance safety, ease of use, and fun.

We are publishing a series of blog posts on Trust & Safety efforts at Bluesky. This is the first in the series.

Trust and Safety (T&S) affects everything — from community policy and spam detection, all the way to the order that replies show up on a post. At Bluesky, the product team works hand-in-hand with T&S to design features that balance safety, ease of use, and fun.

In this blog, we’re looking specifically at toxicity (harassment, dunking, etc.) and some steps we’re taking to mitigate it from the product perspective. Be sure to update your app to the latest version (1.90) to access many of these features!

Detaching quote posts

As of the latest app version, released today (version 1.90), users can view all the quote posts on a given post. Paired with that, you can detach your original post from someone’s quote post.

This helps you maintain control over a thread you started, ideally limiting dog-piling and other forms of harassment. On the other hand, quote posts are often used to correct misinformation too. To address this, we’re leaning into labeling services and hoping to integrate a Community Notes-like feature in the future.

Note: Like blocks, quote post removals are public data. The Bluesky app won’t list all the quote post removals directly on your post, but developers with knowledge of the Bluesky API will be able to access this data.
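For developers curious what that API access looks like, here is a minimal sketch that lists the quote posts on a post. It assumes the public app.bsky.feed.getQuotes endpoint that shipped alongside this feature and that the public AppView accepts unauthenticated reads; the AT URI is a placeholder.

```typescript
import { AtpAgent } from "@atproto/api";

// Read quote posts for a given post from the public AppView.
const agent = new AtpAgent({ service: "https://public.api.bsky.app" });

async function listQuotes(postUri: string) {
  const { data } = await agent.app.bsky.feed.getQuotes({
    uri: postUri,
    limit: 50,
  });
  // Posts whose quote was detached simply no longer embed the original.
  return data.posts;
}

listQuotes("at://did:plc:example/app.bsky.feed.post/3kexample").then(
  (posts) => console.log(`${posts.length} quote posts found`)
);
```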

Detaching the original post from a quote post.

Hiding replies

In app version 1.90, you can now hide replies on your post. Only the original creator of the thread can hide replies. All hidden replies will be placed behind a Hidden replies screen — so they’re still accessible, but much less visible.

Note: Hidden replies – and which posts were hidden by the author – are still public data.

How to hide a reply.

Priority notification filters

If you navigate to Notifications and click the Settings cog in the top right corner, you can now manage your notifications in more detail. With the priority notifications feature, you can filter your notifications to only receive updates from people you follow. We hope this is helpful for people with large followings who are always receiving an influx of notifications, and also for people who may not have expected that their post would get so much attention.

We’ll keep tuning this feature and adding additional options for notifications.

Find the priority notifications filter setting in the Notifications tab.

Changes to how replies show in timelines

Historically, the Bluesky app has shown every reply in the Following feed. This means that every reply has the same visibility as a top-level post, which is often not the user’s intention. We’re reducing the frequency of replies in the Following feed so that it only shows conversations involving replies between at least two people you follow (a rough sketch of this rule follows below).

Additionally, this update should make it much easier for you to update older threads. Now, when you reply to an older thread of yours, it’ll get bumped to the top of your followers’ feeds. (You’ll no longer have to repost your own reply to surface it to your followers.) This update also prevents replies from being separated from the top-level post, making them easier to understand.
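Here is a hypothetical sketch of that reply-visibility rule as described above, based on my reading of the announcement rather than Bluesky’s actual implementation.

```typescript
// Hypothetical sketch of the reply-visibility rule described above:
// a reply appears in the Following feed only when the conversation
// involves replies between at least two accounts the viewer follows.
interface Reply {
  author: string;       // DID or handle of the reply's author
  parentAuthor: string; // author of the post being replied to
}

function showInFollowingFeed(reply: Reply, following: Set<string>): boolean {
  return following.has(reply.author) && following.has(reply.parentAuthor);
}

// Example: a reply between two followed accounts is shown.
const following = new Set(["alice.example", "bob.example"]);
console.log(
  showInFollowingFeed(
    { author: "alice.example", parentAuthor: "bob.example" },
    following
  ) // true
);
```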

How replies are now displayed.

Applying blocks to lists

Bluesky has three kinds of lists: starter packs, curational user lists, and moderation lists.

Now, when you block the creator of a starter pack or a curational user list, you’ll be filtered out of any lists they create. (Blocks still have no effect on moderation lists, because that would defeat their purpose.)

Additionally, we’re updating our policies around acceptable list titles and descriptions and will be labeling lists more aggressively. We’ll share more on this in a blog post next week from the Trust & Safety team.

Future work

Product work, especially as it relates to Trust & Safety, is always a continuous effort. We’re also making some updates on our backend infrastructure to combat ban evasion, botnets, and other forms of toxicity.

We’ll be publishing an update next week from the Trust & Safety team on some of these efforts.


TBD

Open Standards at TBD

How TBD is leveraging open standards

At TBD, we are committed to building a decentralized future where users have greater control over their data and organizations can interact in a more open, trustworthy, and secure way. Open standards are the foundation of this vision, enabling seamless collaboration and interoperability across systems.

Everything we do at TBD is enabled and strengthened by open standards. Our most notable projects, Web5 and tbDEX, are deeply rooted in these open standards. The frameworks for decentralized identifiers (DIDs), verifiable credentials (VCs), and the protocols that facilitate their sharing form the backbone of our work.
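For context, this is roughly what a minimal W3C-style verifiable credential looks like as JSON. The DIDs, dates, and proof value are illustrative placeholders, not output from any TBD SDK.

```typescript
// A minimal W3C-style Verifiable Credential with placeholder values.
// The issuer signs the claims; any verifier that trusts the issuer's
// DID can check the proof without contacting the issuer directly.
const credential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential"],
  issuer: "did:example:issuer123",
  issuanceDate: "2024-08-28T00:00:00Z",
  credentialSubject: {
    id: "did:example:subject456",
    employer: "Acme Corp", // the claim being attested
  },
  proof: {
    type: "Ed25519Signature2020",
    created: "2024-08-28T00:00:00Z",
    verificationMethod: "did:example:issuer123#key-1",
    proofValue: "zExampleSignature",
  },
};
```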

Our Approach to Open Standards

Open standards ensure that different systems and organizations can work together seamlessly, creating a cohesive environment where data and identity can move across personal and organizational boundaries.

At TBD, we are deeply involved in several key standards bodies to ensure that the standards we rely on are robust and interoperable:

Decentralized Identity Foundation (DIF): This organization serves as an incubator for new ideas and standards related to decentralized identity. We are actively contributing to several key initiatives here, such as decentralized web nodes and trust establishment protocols.

W3C: The World Wide Web Consortium (W3C) is the authority on web standards, and we are heavily involved in their work on DIDs and VCs. W3C’s role in defining these standards is crucial for ensuring their broad adoption across the web.

OpenID Foundation: We’re also working with the OpenID Foundation to integrate their standards with VCs and DIDs. This work is focused on extending OpenID’s capabilities beyond web-based applications, making them applicable in backend services and mobile environments.

One of our main tasks is ensuring that our software aligns with these standards. Our Web5 spec and tbDEX spec are prime examples of adopting existing specifications to meet our broad interoperability needs.

Current Focus Areas

Our ongoing work in the standards space is focused on several key areas:

Interoperability: We’ve defined an interoperability profile for tbDEX, which outlines the standards we’re using and how they interact. This is a starting point for enabling seamless exchanges on the tbDEX network.

Selective Disclosure: As we look to enhance user privacy and control, we’re exploring the use of selective disclosure credentials. These allow users to share only the information necessary for a specific interaction, rather than their entire credential (a conceptual sketch follows this list).

Trust Frameworks: We’re also working on establishing a trust framework that will enable different organizations to agree on legal and compliant ways to trust one another. This is particularly important for interactions on the tbDEX network, where trust is paramount.
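Conceptually, selective disclosure can work by committing to each claim separately, for example with a salted hash, so the issuer signs the commitments and the holder reveals only the claims needed. The helper below is a hypothetical sketch of that idea (the salted-hash approach used by formats like SD-JWT), not an actual TBD API.

```typescript
import { createHash, randomBytes } from "node:crypto";

// Hypothetical sketch of the salted-hash idea behind selective
// disclosure (as used by formats like SD-JWT); not an actual TBD API.
function commitClaim(name: string, value: string) {
  const salt = randomBytes(16).toString("hex");
  const digest = createHash("sha256")
    .update(`${salt}.${name}.${value}`)
    .digest("base64url");
  // The issuer signs only the digests; the holder keeps the disclosures.
  return { digest, disclosure: { salt, name, value } };
}

const dateOfBirth = commitClaim("dateOfBirth", "1990-01-01");
const country = commitClaim("country", "DE");

// The issuer signs [dateOfBirth.digest, country.digest] into a credential.
// To prove country without revealing date of birth, the holder presents
// the signed credential plus only country.disclosure; the verifier
// re-hashes the disclosure and matches it against a signed digest.
console.log(country.digest);
```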

Looking Ahead

As we advance our projects, we remain focused on refining our specifications to ensure they are well-defined, thoroughly tested, and widely adopted. This includes ongoing work on the Web5 spec, which we are continuously improving with better test vectors and more robust compliance checks.

We’re also making significant strides with our Rust Core approach, which will form the basis for many of our SDKs. This effort will allow us to support multiple languages more efficiently and ensure greater consistency across our implementations.

The work we’re doing now is laying the groundwork for a decentralized future where users have more control over their data, and organizations can interact in a more open, trustworthy, and secure way. As we move forward, our commitment to open standards will remain at the heart of everything we do.

Get Involved

If you're working on implementing Verifiable Credentials (VCs) or Decentralized Identifiers (DIDs), please reach out!

Join our Discord community for direct access to our team and ongoing discussions. You can also find us on Twitter @TBDevs.

We look forward to your contributions and questions!

Tuesday, 27. August 2024

Finicity

New report: Building the future of bill payments 

In today’s rapidly evolving digital landscape, consumer preferences and expectations are reshaping the way we engage with financial transactions. Choice lies at the heart of consumers’ financial lives, including how… The post New report: Building the future of bill payments  appeared first on Finicity.

In today’s rapidly evolving digital landscape, consumer preferences and expectations are reshaping the way we engage with financial transactions. Choice lies at the heart of consumers’ financial lives, including how they pay their bills — from traditional methods like checks and cards to emerging technologies like account-to-account payments.  

To understand how consumers prefer to pay their bills and why, and how they want to do so in the future, Mastercard surveyed over 2,000 consumers across the U.S. We explored the evolving landscape of consumer payment preferences, focusing specifically on the intersection of choice, convenience, and security, and how these core tenets will shape the future of bill payment.  

Explore some of the highlights of the report below or download the full report here

An overview of bill payments and preferences  

Consumers are looking for a seamless, efficient, secure way to pay their everyday expenses. The research shows that they are consistently turning to credit and debit cards, as well as options where they can pay directly from their bank accounts, like Bill Pay and ACH/e-check options.   

Credit cards top the list of the most often used payment methods for recurring bills at 47%, followed by bill pay features through banks at 41%, debit cards at 39%, and ACH at 37%.  

Looking forward, respondents are inclined towards similar payment methods for future recurring bills, with credit cards and bill-pay-by-bank features leading the way. This trend underscores the reliability and trust needed for recurring expenses. 

Get all the insights by downloading the full report

Consumers are driven by choice  

Consumers want three fundamental things in their payment experiences: choice, convenience, and security, and they want payment solutions that empower these elements.   

Placing high value on having choice and flexibility in payment methods when paying their bills, an overwhelming number of respondents expect businesses to provide multiple payment options, indicating a strong demand for variety in how they pay.    

However, only 51% of respondents feel they are frequently given the opportunity to choose their preferred payment method. This suggests a sizable gap in businesses meeting these expectations consistently.  

Convenience, cost and security pave the way for open banking  

Based on the data, there is a clear opportunity for more businesses to embrace new kinds of payment methods supported by open banking technology.  

These new methods use consumer-permissioned connections to bank accounts for payment data rather than having the consumer input their card or account and routing numbers.  

The majority of consumers, across all age groups, are open to new pay-by-bank methods that would save billers money and reduce the likelihood of non-sufficient fund returns – as well as offering security, convenience, and support for consumers to manage their finances.  

Download the bill payments report to learn more about how open banking increases choice in bill payments for consumers and businesses, or head over to our open banking blog for inspirational use cases and insights. 

The post New report: Building the future of bill payments  appeared first on Finicity.


TBD on Dev.to

How Web5 and Bluesky are Building the Next Layer of the Web - A Comparative Analysis


As companies increasingly commodify our personal data and privacy breaches make headlines, many technologists are creating user-centered frameworks that empower individuals to control their digital identities and personal information. This concept, known as Self-Sovereign Identity (SSI), enables users to decide what data they share and with whom. While blockchain technology is a popular choice for implementing SSI, companies like TBD are exploring (and even creating) alternative technologies to achieve these goals.

My Perspective on the State of SSI

Our efforts at TBD are part of a larger movement. In fact, there’s a consortium of tech giants and startups working together through the Decentralized Identity Foundation to establish open standards and best practices for SSI, focusing on:

Digital Identity Interoperability
Data Ownership
Reliable digital verification methods

The SSI industry is making tangible progress, especially in government sectors, as our technological solutions support the advent of Mobile Driver's Licenses.

However, one of my concerns with our industry is that every company is implementing its own proprietary methods. Despite aiming to solve similar problems, companies are developing their own unique DID methods, wallets, and tooling. This fragmentation raises questions for me:

Can we achieve widespread adoption with disparate systems?
Will the multitude of competing mechanisms overwhelm both users and developers?
Will our various systems eventually work in tandem?

In November 2023, I began investigating the answers to these questions through a livestream series where I interviewed SSI experts from different companies. After conducting approximately 30 interviews, these questions remain unanswered. However, I’ve gained more in-depth knowledge about:

Key players in the SSI space
Various technical approaches to implementing SSI
Real-world applications of SSI

Interviewing Bluesky

I most recently interviewed Dan Abramov, co-creator of Redux and React core team member, about his work at Bluesky and the development of Bluesky's underlying technology – Authenticated Transfer Protocol, or AT Proto for short. I learned that while TBD’s Web5 and Bluesky’s AT Proto share the vision of a decentralized and user-centric web, their approaches and underlying technologies offer a fascinating contrast. I'll examine these parallel approaches in hopes that TBD, Bluesky, and the broader community can gain valuable insights into building infrastructure for the decentralized web.

Building the Next Layer of the Web

Similarities

The web as we know it today consists of physical, network, transport, application, and data layers. Instead of replacing the existing architecture altogether, AT Proto and Web5 aim to add a new layer enabling data to exist beyond individual applications. Both provide tools for developers to build apps within their respective ecosystems.

Bluesky actually serves as a reference implementation to inspire developers and showcase AT Proto's potential.

Differences

AT Proto focuses on decentralized social media, while Web5 enables developers to build any type of application, from financial tools to social media to health management. For example, I developed a fertility tracking app during a hackathon to demonstrate personal health data ownership. Additionally, at TBD, we use components of the Web5 SDK to build the tbDEX SDK, an open financial protocol that can move value anywhere around the world more efficiently and cost-effectively than traditional financial systems.

Data Portability

Similarities

A common frustration with traditional web applications is that users often lose access to their data when a platform shuts down. Even if a user can export their data—say as a CSV file—it becomes static, no longer live or interactive. This data is essentially lost for most users, especially non-technical ones, as it's difficult to rebuild the ecosystem that once surrounded it. For example, moving from one social media app to another means users lose their followers, viral posts, and reputation and have to start from scratch.

Web5 and AT Proto enable users to take their data from one application to another. For example, if a user leaves Bluesky, which operates on AT Proto, they can migrate their data to another AT Proto-compatible app without losing their social connections or posts. Similarly, if an app built with Web5 were to shut down, a user could bring their data to another Web5 app.

Differences

Data portability on these platforms varies due to different data management approaches. AT Proto uses a federated model where each app operates a Personal Data Server (PDS). The PDS, typically managed by the app provider, stores all user data in a repository tied to the user’s identity. Users can move their repository—containing posts, social graphs, and more—between apps within the AT Proto ecosystem by connecting it to another PDS.

In contrast, Web5 depends on Decentralized Web Nodes (DWNs), which are personal data stores fully controlled by the user. To switch apps, users point the new application to their DWN and specify the types of data users of the app can access.

Use of W3C Standards for Authentication

Similarities

Both AT Proto and Web5 leverage the W3C standard called Decentralized Identifiers (DIDs), which are globally unique alphanumeric identifiers that can move with you across different applications. This enables users to maintain their identities consistently across platforms.

While DIDs are often associated with blockchain technology, both Web5 and AT Proto implement a blockchain-less approach. For instance, Bluesky uses a custom DID method called did:plc (DID Placeholder), while Web5 employs did:dht (DID Distributed Hash Table), which anchors DIDs on BitTorrent instead of a blockchain. Learn more about TBD’s DID method here.
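For readers new to DIDs, resolving one ultimately yields a DID document describing the identifier's keys. Below is a minimal, illustrative example following the W3C shape; the identifier and key values are fabricated, and real did:dht or did:plc documents carry method-specific details:

```typescript
// Minimal, illustrative W3C-style DID document. The identifier and key
// material are fabricated; real did:dht documents are resolved from
// BitTorrent's DHT and did:plc documents from Bluesky's PLC directory.
const didDocument = {
  "@context": "https://www.w3.org/ns/did/v1",
  id: "did:dht:example123", // hypothetical identifier
  verificationMethod: [
    {
      id: "did:dht:example123#key-1",
      type: "JsonWebKey",
      controller: "did:dht:example123",
      publicKeyJwk: { kty: "OKP", crv: "Ed25519", x: "<public-key-bytes>" },
    },
  ],
  authentication: ["did:dht:example123#key-1"],
};
```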

Differences

Many developers have told me that the way AT Proto handles authentication is what attracted them to Bluesky, but many of them don’t even realize that they’re using DIDs under the hood. On Bluesky, users can use one of their existing domain names as their username. Bluesky verifies ownership by performing a DNS lookup to make sure the domain belongs to the user. Once verified, the domain is linked to a DID, and the user is marked as verified on their account.
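As a rough sketch of what that DNS check involves, the snippet below looks for an AT Proto-style TXT record at _atproto.&lt;domain&gt;; the domain and DID are placeholders, and Bluesky's production verification involves more than this single lookup:

```typescript
import { promises as dns } from "node:dns";

// Sketch of AT Proto-style handle verification: a TXT record at
// _atproto.<domain> is expected to contain "did=<the user's DID>".
async function verifyHandle(domain: string, expectedDid: string): Promise<boolean> {
  const records = await dns.resolveTxt(`_atproto.${domain}`);
  // resolveTxt returns string[][]: each record may arrive in chunks.
  return records.some((chunks) => chunks.join("") === `did=${expectedDid}`);
}

// Usage with placeholder values:
// await verifyHandle("alice.com", "did:plc:abc123");
```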

Web5 also uses DIDs for authentication but in a different way. DIDs eliminate the need for usernames and passwords. Instead, you can log in directly with your DID. This is possible because, in the Web5 ecosystem, every DID has cryptographic keys that securely prove ownership.
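In practice, that passwordless flow is about as short as it sounds. A minimal sketch with TBD's Web5 SDK follows; the return shape reflects my reading of the Web5 docs and may drift between SDK versions:

```typescript
import { Web5 } from "@web5/api";

// Web5.connect() creates (or reuses) a DID and a local DWN for the user;
// the DID's cryptographic keys stand in for a username and password.
const { web5, did: myDid } = await Web5.connect();

console.log(`Authenticated as ${myDid}`); // e.g. did:dht:...
```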

Permission Management

Similarities

Both AT Proto and Web5 offer permission management systems, but there are key differences in who can manage these permissions.

Differences

AT Proto takes an application-centric approach to permission management. Permissions are defined by applications using schemas called lexicons, which dictate the rules that the PDS follows. As a result, the extent of control users have over their data depends on the permissions set by the application.

Permission management is where Web5 shines. Users define access controls through JSON schemas called Protocols, specifying who can access specific data stored in their DWN. This is why building a fertility tracking app with Web5 was ideal for me: I could explicitly deny social media apps, marketing platforms, and retailers access to my personal health data, while allowing only my healthcare provider and partner to access it.
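To make that concrete, here is a simplified, hypothetical protocol definition in the spirit of that fertility-tracking app; the protocol URI, type names, and exact action fields are illustrative and may differ from what current Web5 SDK versions accept:

```typescript
import { Web5 } from "@web5/api";

const { web5 } = await Web5.connect();

// Hypothetical Web5 protocol definition: health records stay private to
// their author, with read access granted only to a chosen recipient
// (e.g. a healthcare provider). Field names are illustrative.
const healthProtocol = {
  protocol: "https://example.com/protocols/health", // hypothetical URI
  published: false,
  types: {
    cycleRecord: { dataFormats: ["application/json"] },
  },
  structure: {
    cycleRecord: {
      $actions: [
        // The DWN owner (author) always retains full access; this grants
        // read access to the record's designated recipient only.
        { who: "recipient", of: "cycleRecord", can: ["read"] },
      ],
    },
  },
};

// Install the protocol on the user's DWN.
const { status } = await web5.dwn.protocols.configure({
  message: { definition: healthProtocol },
});
```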

Special URLs for Data Access

Similarities

Most web users are familiar with URLs, which serve as web addresses to retrieve data online. Similarly, AT Proto and Web5 each use specialized URLs to access data within their ecosystems.

Differences

In AT Proto, special URLs start with the prefix at:// and point to data in a user's PDS.

Example: at://alice.com/app.bsky.feed.post/1234 might reference a specific post in a user's social media feed.

In Web5, Decentralized Resource Locators (DRLs) start with the prefix https://dweb and link to data stored in a DWN.

Example: https://dweb/${did}/read/records/${recordId} allows a user to fetch a specific record from a DWN.
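Programmatically, dereferencing a DRL corresponds to a record read against the owner's DWN. A minimal sketch with the Web5 SDK, using placeholder DID and record ID values:

```typescript
import { Web5 } from "@web5/api";

const { web5 } = await Web5.connect();

const ownerDid = "did:dht:example123"; // placeholder
const recordId = "abc123";             // placeholder

// Fetch one record from the owner's DWN, then decode its payload.
const { record } = await web5.dwn.records.read({
  from: ownerDid,
  message: { filter: { recordId } },
});

const data = await record.data.json(); // or .text() / .blob()
```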

Learn More

While I've described some core differences between Web5 and AT Proto, there are more interesting features to explore, including how Bluesky implements algorithmic choice, how Web5 uses W3C's Verifiable Credentials to prove digital identity, and how both platforms refer to individual data pieces as "records." These topics deserve their own deep dives. For now, I encourage you to continue exploring via:

🎥 Watch: My interview with Dan Abramov explaining Bluesky’s implementation

📚 Learn: Check out my SSI expert interview series called tbdTV

🤝 Join: Build with us and join our discussions on Discord.


Spruce Systems

Meet the SpruceID Team: Bryce Einck

If you're a SpruceID client, you may know Bryce! Get to know one of our incredible Technical Success Managers.
Name: Bryce Einck
Team: Product Delivery
Based in: San Diego, CA

About Bryce

I began my journey in customer service as a technician at the Apple Genius Bar, where I honed my troubleshooting and customer service skills. From there, I moved into technical operations and integration support for an all-in-one healthcare practice growth solution, where I expanded my expertise by learning PHP and working with IDEs for integration troubleshooting. I then transitioned to a Customer Success Manager and Product Deployment role at a tech startup focused on providing AI customer support solutions for e-commerce brands. In these positions, I gained experience with product deployment, JavaScript, and consulting on using AI in customer service.

After a brief gap in work, I was looking for something new. I was excited to become a Technical Success Manager at SpruceID because the technology and privacy surrounding digital identity seemed challenging and important for our future.

Can you tell us about your role at SpruceID?

At SpruceID, I handle the day-to-day between Spruce and the California DMV, manage the priorities and expectations of SpruceID's deliverables, provide technical troubleshooting for any arising issues, and facilitate.

What do you find most rewarding about your job?

I enjoy being part of a process that improves and contributes features to the California DMV Wallet mobile application, benefiting the digital identity community. It is fun to be on the edge of new tech, especially tech that has yet to be fully standardized.

What has been the most memorable moment for you at SpruceID so far?

The opportunity to travel to Brazil, meet the team, explore new food/culture, and mix local drinks. I also love to surf, and had the opportunity to surf in Brazil as well!

How do you define success in your role, and how do you measure it?

Success in my role is achieved by positively managing expectations and delivering on what is asked for and promised. Success also means supporting my team in any way I can. Measuring success can be hard to define at a startup due to the constantly changing landscape, so I measure it by consistently delivering a high-quality product.

What is your favorite part about working at SpruceID?

I find the team incredibly smart, fun, and supportive!

Fun Facts

What do you enjoy doing in your free time? I enjoy being outdoors, but to stay active, surfing and bouldering are my go-tos year-round. All my other free time is spent with my family and friends, playing overcompetitive card/board games, and cooking.

If you could be any tree, what tree would you be and why? I would choose to be a Redwood tree. I grew up surrounded by them and have always loved how large they get, their ability to grow together in fairy rings as a support system, and their fire-resistant qualities.

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


KuppingerCole

NIS2 - EU Network and Information Security Directive

by Martin Kuppinger


NIS2, the revised EU Network and Information Security Directive (EU 2022/2555), entered into force on January 16th, 2023. EU member states are obliged to transpose the directive into national law by October 17th, 2024. NIS2 mandates organizations to strengthen their cybersecurity posture and have proper incident handling and reporting in place. It also extends the scope very significantly, affecting an estimated 160,000 organizations within the EU. Thus, organizations must understand where to focus their cybersecurity investments to be prepared for NIS2.

Enhancing Security Frameworks through Zero Trust and Identity Threat Detection and Response (ITDR)


by Paul Fisher

In a world that is becoming increasingly digital, it is crucial to have strong security frameworks in place. The shift towards cloud computing, remote work, and digital transformation has expanded the attack surface for organizations, making traditional security models insufficient. This KuppingerCole White Paper explores the integration of Zero Trust principles and Identity Threat Detection and Response (ITDR) to enhance security frameworks, providing a proactive and comprehensive approach to safeguarding digital assets.

MyDEX

What we do: Identity as a Service


This blog is fourth in a series explaining how Mydex’s personal data infrastructure works. It explains how our platforms help deliver our mission of empowering individuals with their own data: how it enables them to use this data to manage their lives better and assert their human rights in a practical way on a daily basis.

Blogs in this series are:

What IS a Personal Data Store?
Personal Data Stores and Data Sharing
Connecting ‘data about me’ to the world around me
Identity as a Service

Thirty years ago, when the Internet was still a new thing, a joke started doing the rounds. “On the internet,” it said, “nobody knows you’re a dog”.

It was a flippant comment but it was also amazingly prescient. This issue of knowing who the other person is at the end of the line has continued to dog the provision of digital services ever since.

When you see a friend or family member in the street you can recognise them instantly. In that instant, your brain processes dozens of cues relating to their facial features and expressions, their voice, size and weight, gait, mannerisms and gestures, so that you ‘just know’ it’s them. It does these things so fast and accurately that it seems incredibly simple. But it is not, as robotics and AI practitioners have discovered to their cost over many decades.

None of the cues that our brains process so brilliantly are available when you deal with another person remotely, online. Hence that early Internet joke.

For a society and economy that does more and more things online, this is incredibly important. It’s not just about fraud, though that is a big and ever-present danger. It’s also about simple practicality, efficiency and quality. If people and organisations want to do business with each other online, they need to be able to recognise one another. Online or ‘digital’ identity is thus a sine qua non of all online service provision: without being able to recognise people when they sign up to and use an online service, it’s impossible for that service to operate.

Mydex personal data stores are helping to solve this problem, in two ways.

Two meanings of ‘identity’

Before we go any further, there’s one big source of confusion that we need to address. In the context of online interactions and transactions the term ‘digital identity’ is commonly used to mean two very different things. In many conversations and debates, people move seamlessly from one of these meanings and back again without even realising they’re doing it. The result is endless confusion.

One of these meanings is knowing (or at least being pretty confident) that the person (or organisation) you are dealing with is who they say they are. This is the whole area of identity assurance (sometimes called identity verification). Like all those cues of sight, sound and behaviour that we use to recognise our friends and family, this can involve gathering quite a lot of information about the person and ‘binding’ it to them. For example, you might know their name, address and age, and that they hold this passport number and that driving licence number; the more bits of information you have about them, the more confident you can be that they are who they say they are.

The second meaning of identity is more mundane and administrative, but perhaps even more important. It’s about simply recognising them when they turn up at your front door — when they log in to a website or app for example. This, we call identity authentication.

The two may be connected. For example, a bank might go through a process of identity assurance when first providing a customer with a bank account. At this stage the bank needs to have lots of details about who the person is. But once that process is complete, all the bank needs to do is recognise that customer when they return to use the service by, for example, use of a username and password and/or other authentication steps. This is the identity authentication bit.

On the other hand, identity assurance and identity authentication might not be connected at all. With some types of service, say when you are subscribing to a newsletter, the service provider doesn’t really need to know who the person is at all. All they need to know is if it’s the same person returning to use that service. In this case, the person could just as well use an invented name such as Mickey Mouse, along with a password like M-Mouse and it wouldn’t really matter. The service could still operate.

Once the ‘relying party’ (the party using the authentication) knows that the person is using the same identifiers, they can then map their activities, records, specific preferences etc to that individual, for their use of the service, without necessarily knowing who they actually are.

Mydex’s role in identity

Mydex’s personal data store infrastructure makes a fundamental contribution to both types of identity challenge. By enabling individuals to amass large quantities of verified attributes (sometimes referred to as verified credentials) about themselves, and to share these verified attributes easily, quickly and safely, our personal data stores go a long way to solving the problem of identity assurance and verification, without the need for privacy invading processes such as ‘identity cards’. You can see more detail about what we do on this front here.

However, the focus of this blog is on the second, practical, administrative matter of identity authentication — what all of us have to do many times a day when logging in to different types of online service.

Here, the current state of play is … a complete mess.

It grew into this mess quite naturally. First off, in the very earliest days of online services, service providers had to recognise customers when they logged in, used and returned to the service. So they invented the username and password.

It’s a pretty neat solution, except for one thing. Every different organisation created its own bespoke process for recognising people when they use a service, requiring individuals to invent (and remember) hundreds or perhaps thousands of different usernames and passwords. (Or, for the sake of convenience, they could use just one username and password, in which case if they ever got hacked the hacker would have access to every single service they had ever used).

This organisation-centric ‘bespoke solution’ to identity authentication multiplied costs and complexity for both people and service providers many times over. Most service providers had no desire to be in ‘the username and password business’ but took it on simply because they had to. It was a cost of doing business.

Then, monopolist digital platforms like Google and Facebook spotted a market opportunity. “If you log in to our service we can use the credentials we have created for you to log you on to other services!” In this way, individuals didn’t have to remember hundreds of different usernames and passwords, and service providers could get out of having to manage their username and password business. How convenient! Social sign-in was born.

On the surface, it looked like an ideal win-win. But there was one drawback to this ‘solution’, and it is an ABSOLUTELY HUGE drawback. It delivers privacy ‘bleed’ on a gargantuan scale. By letting the digital monopolists provide ‘social sign-in’ services, individuals effectively give them permission to track their movements across the entire internet, gathering data about everything they do online — all to further concentrate power and profits in the hands of these monopolists.

Social sign-in is one of today’s volcano issues and scandals, just waiting to blow up as and when people begin to realise just how deeply invasive and pervasive and exploitative it is — all to escape the inconveniences and costs created by the first faulty attempts to solve the identity authentication problem in an organisation-centric way.

Where Mydex fits in

With Mydex’s Identity (authentication) as a Service (IDaaS) the core idea of social sign-in (e.g. only having to log in once to access many different services) is still achieved but without any privacy bleed. In fact, the goal of a single log-in is achieved while enhancing individuals’ rights and control.

It works like this. When an individual gets their personal data store they set up a username and password by which Mydex can recognise them when they log in (i.e. no different to any other service provider). They have this for life. Then, once the individual is logged in to Mydex, they can use Mydex’s connections with other participating services to automatically log in to those services too.

This means that individuals can flow from one service to another without ever having to log in to these other services — because all the handshakes are working for them, automatically, behind the scenes, not getting in the way of what they are trying to do.

But this time, there is no data surveillance. Mydex is not tracking the individual anywhere. It is not collecting any information about where they go or what they do online. It is simply using the fact that it has established a secure connection with another service to open a gate and let the individual through, if and when they want to pass through that particular gate (i.e. to that particular service).

Service providers can still minimise their involvement in the username and password service but with an added benefit that, in using Mydex’s IDaaS they are not handing over oodles of data about their customers to Silicon Valley digital monopolies. Any data generated by the transaction or interaction goes into just one of two places: into relying parties’ own systems or into the individual’s personal data store. Never to a third party, including Mydex. That’s because Mydex cannot see any of the data that goes into the individual’s personal data store as explained here.

The result is that both sides benefit from both convenience and efficiency and added safety. Why added safety?

Originally, identity authentication systems were established by organisations to protect their own digital front doors. They were designed to protect the safety of the organisation, not the individual. The Mydex approach is designed to help individuals protect their digital front doors. It’s about empowering citizens with agency; with the information services they need to make their way efficiently and effectively within a complex world of service provision.

Because data about interactions is stored in the individual’s PDS, every time the Mydex ID is used it creates a log which the individual can inspect. For example, it could alert them to the fact that somebody has tried to use their ID to log-in to a service. In this way, the individual gets an audit trail of every use of their Mydex ID. This information is held in their PDS for their use alone, away from prying eyes — information that is NOT handed over to the likes of Google or Facebook.

Just to emphasise: This is data that Mydex itself cannot access because each individual has their own private encryption key to their own PDS. This means that while Mydex holds the data (in encrypted form) in its systems it cannot actually ‘see’ its content.

Extra added value

The above provides a simple summary of Mydex’s Identity as a Service model. But there is more to this simple service than meets the eye.

First, individuals can increase the security of their interactions if they want to, by adding extra layers of security. They can, for example, require a ‘multifactor authentication process’ whereby an additional piece of information is used to authenticate their identity. This could be a one-time code sent to their phone or email, or generated by an authenticator app.
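Mydex does not spell out its MFA internals here, so purely as a generic illustration of the authenticator-app variant, here is a time-based one-time password (TOTP) flow sketched with the otplib package:

```typescript
import { authenticator } from "otplib";

// Generic TOTP illustration, not Mydex's actual implementation.
// At enrolment, the service generates and stores a shared secret,
// which the user loads into their authenticator app (usually via QR).
const secret = authenticator.generateSecret();

// The app derives a short-lived code from the secret and current time:
const token = authenticator.generate(secret);

// At login, the service checks the submitted code against the secret:
const ok = authenticator.check(token, secret);
console.log(ok); // true while the code is within its validity window
```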

Second, the individual can also add other identifiers like email addresses and mobile numbers to their MydexID to protect them from use by anyone else. Registering multiple email addresses and mobile numbers also allows the individual to select any of these alongside their core MydexID itself to login, because they are all linked together. This delivers greater security and protection and also overcomes those issues where people lose access to an email or mobile number. Now they always have back-up routes for accessing their MydexID and linked services.

Third, individuals can set preferences about where notifications may be sent to them, for example a specific email address, a mobile number, or both. Each person has different ways they prefer to get notifications. This gives them the ability to make that choice independently of any relying party (service provider).

This is NOT about giving service providers the power to create hoops for individuals to jump through. It’s about enabling individuals to add extra layers of security if and when they feel they need to. It’s about putting the individual in control.

Fourth, there may be occasions when an individual wishes to log in to a service provider (such as a researcher or survey outfit) where they share information about themselves but want to do so anonymously. They can use their Mydex ID to do this. This is because, along with the Mydex ID comes what we call a ‘universal unique identifier’ (UUID) which hides their Mydex ID and contact details from the service provider.

This UUID acts like a wrapper that hides what is inside. It provides the same guarantees as those provided by the username and password but without actually providing these actual identifiers. It can be used by the service provider to recognise that it is the same person returning to the service without actually knowing who that person is.
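Mydex doesn't publish the derivation behind its UUIDs here, but one common way to build such a stable, service-scoped pseudonym is a keyed hash; the sketch below is illustrative only, not Mydex's actual scheme:

```typescript
import { createHmac } from "node:crypto";

// Illustrative only: derive a stable pseudonym per service from a
// user-held secret. The same person is recognisable to one service
// across visits, yet unlinkable across services and to their real ID.
function pseudonymFor(userSecret: string, serviceId: string): string {
  return createHmac("sha256", userSecret).update(serviceId).digest("hex");
}

// Same inputs, same pseudonym (stable recognition for one service):
//   pseudonymFor(secret, "research-study-42")
// Different service, different, unlinkable pseudonym:
//   pseudonymFor(secret, "survey-outfit-7")
```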

This enables researchers who want to work with someone over a period of time to see changes in their behaviours and life without actually knowing who they are. And it enables individuals to participate in such research, safely and securely.

Fifth, the system allows identity authentication to work ‘in reverse’ where, if they have already signed in to a service that’s connected to the Mydex IDaaS, individuals can use the fact that they have logged in to this service to also log in to their personal data store (PDS). There, they can add and update data and manage their preferences, including things like adding more Multi Factor Authentication Options and approving connections between their PDS and subscribers adding data.

Further Benefits

Service providers further benefit in a number of ways. As well as not having to operate their own username and password business, they can use the Mydex ID to connect to the individual’s personal data store (if the individual wants them to connect). This opens the door to safe, secure, permissioned, two way data sharing.

For example, if the individual already holds a profile about themselves in their PDS — a profile containing data usually held in a service provider’s ‘My Account’ functionality — then the individual can simply click a button to provide that information to the service provider. No more having to fill in online forms!

This makes the process of onboarding onto a new service much easier, quicker and safer, especially for smaller organisations.

Service providers can also trigger multi-factor authentication processes if they require it — as do most banks for example. In particularly sensitive situations, it is also possible to create unique identities that only work for that particular transaction and cannot be reused once that transaction has been completed.

Conclusion

Thirty years ago, it was a joke that people didn’t know who they were dealing with when interacting online. Today, it’s no longer a joke. It’s a massive cost and hassle for millions of people and organisations alike. These costs and inconveniences are being gamed and abused to an absurd extent by both fraudsters and monopolists.

But there are ways to solve this problem safely and efficiently. And Mydex has found a way to do just that.

What we do: Identity as a Service was originally published in Mydex on Medium, where people are continuing the conversation by highlighting and responding to this story.

Monday, 26. August 2024

Ontology

The Telegram CEO’s Arrest Highlights the Urgent Need for Decentralization and Privacy Protections


​​The recent arrest of Telegram’s CEO Pavel Durov at a Paris airport is more than just a headline; it’s a stark reminder of the escalating global crackdown on privacy-centric platforms. Durov, who has championed digital freedom, is now facing serious allegations that his platform has been used for illegal activities ranging from money laundering to child exploitation. But beneath these charges lies a broader, more urgent issue — the clash between centralized control and the fundamental need for decentralization, censorship resistance, and privacy in our digital lives.

Telegram, like many centralized platforms, operates in a gray area where user privacy is at odds with government demands for access and control. This arrest underscores the vulnerabilities of centralized systems — where a single point of failure, like Durov’s arrest, can jeopardize the entire platform and its user base. The incident raises critical questions: How much control should governments have over communication platforms? And, more importantly, how can we safeguard individual privacy in an increasingly surveilled world?

Decentralized systems offer a compelling solution. Unlike traditional platforms, they are not controlled by any single entity, making them inherently resistant to censorship and external pressure. A decentralized messaging app, for example, would not have a CEO who could be arrested, nor would it have servers that could be easily seized. This structure ensures that users maintain control over their data and communications, rather than relinquishing it to a central authority.

Moreover, decentralized identity (DID) plays a crucial role in this landscape. DID allows individuals to own and control their identities across different platforms without depending on a centralized authority. This is essential in preventing the misuse of personal data and ensuring that privacy remains intact, even if one platform is compromised. In an era where governments and corporations alike are vying for more control over digital spaces, the protection offered by DID is invaluable.

The implications of Durov’s arrest go beyond Telegram. It signals the growing pressure on privacy-focused platforms and the need for a shift toward decentralization. As governments increase their grip on digital communications, the only sustainable path forward lies in systems that are beyond their reach — systems that prioritize individual autonomy, censorship resistance, and privacy. The rise of decentralized identity technologies is not just timely; it’s necessary for preserving the freedom that centralized platforms can no longer guarantee.

In conclusion, Durov’s arrest is a wake-up call. It underscores the fragility of centralized systems in the face of authoritarian pressure and the critical need for decentralized alternatives that respect and protect our privacy. As the battle over digital freedom intensifies, decentralization and decentralized identity will be key to ensuring that the internet remains a space for free and open communication, untainted by the heavy hand of censorship and control.

The Telegram CEO’s Arrest Highlights the Urgent Need for Decentralization and Privacy Protections was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.


Spherical Cow Consulting

Digital Identity in the Age of AI: Challenges and Opportunities


Yes, AI is everywhere. And yes, that means it is having an impact (one that will only grow) on the digital identity space. And like most other transformative technologies, the impact will be incredibly positive … and also something to be very concerned about. Now that the paper led by OpenAI, which asks policymakers, technologists, and standards bodies to think about how to develop mechanisms to identify whether an entity online is a person or an AI, is out (I had a small part in that paper), the whole AI-and-identity question is back at the forefront of my brain.

How AI is Changing Digital Identity Security

As our online identities grow more complex, artificial intelligence (AI) is playing a bigger role in keeping them safe. Organizations use AI to spot all sorts of nefarious activities and protect personal information by analyzing patterns and catching anything out of the ordinary. (Which makes me ask, “what is ordinary and who defines it?” I’d love to have that conversation sometime over beverages.)

AI isn’t just for tech giants—industries like banking and e-commerce are using it to prevent fraud and verify identities. For example, in banking, AI can track transaction habits to flag anything unusual, potentially stopping fraud before it happens. In online shopping, AI helps confirm who you are during transactions, cutting down on the risk of identity theft.

What is Adaptive Authentication?

Adaptive authentication is changing how we verify digital identities. Instead of relying on passwords, this method uses AI to evaluate the risk of an access request in real time. It looks at factors like where the request is coming from, what device is being used, and what time it is.

This approach has big benefits. For users, it means fewer annoying password prompts. For companies, it means stronger security because the system can adjust the level of authentication needed based on the perceived risk. All good stuff, until you look at the amount of data AI must access in order to make these determinations. Privacy advocates have a lot to say about this.
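As a toy illustration of the idea (real systems weigh far richer signals, usually with trained models, and every threshold below is invented), a risk score might be assembled like this:

```typescript
// Toy adaptive-authentication sketch: score a login attempt from a few
// contextual signals, then decide how much authentication to demand.
interface AccessRequest {
  knownDevice: boolean;           // device seen before for this account?
  countryMatchesHistory: boolean; // request origin consistent with history?
  localHour: number;              // 0-23 at the user's location
}

function riskScore(req: AccessRequest): number {
  let score = 0;
  if (!req.knownDevice) score += 0.4;
  if (!req.countryMatchesHistory) score += 0.4;
  if (req.localHour < 6 || req.localHour >= 22) score += 0.2; // odd hours
  return score;
}

// Low risk passes silently; medium risk steps up to MFA; high risk blocks.
function requiredStep(req: AccessRequest): "none" | "mfa" | "deny" {
  const s = riskScore(req);
  return s < 0.3 ? "none" : s < 0.7 ? "mfa" : "deny";
}
```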

The Challenges of AI in Digital Identity

So let’s talk about the privacy aspects for a moment. While AI offers new ways to secure digital identities, the ramifications when it comes to privacy are huge. AI systems need a lot of data to work effectively, and this raises questions about how that data is collected and used.

Another concern is the potential for AI to be used in malicious ways, like creating deepfakes—fake media that looks real but isn’t. This technology could be used to create false digital identities, making it harder to tell what’s real online.

The European Union’s AI Act tackles the issues of where and how AI might be used, and is the first comprehensive regulation in the world on the subject. But, being the first, there are still significant concerns about whether it is enough. The rest of the world is watching to see what works, what doesn’t, and what they can take away from the effort for their own regulations.

AI’s Role in Different Industries

AI-driven digital identity tools are being used in many sectors, each with unique challenges and applications:

Finance: AI helps detect fraud faster and more accurately by analyzing years of transaction data to spot suspicious patterns.
Healthcare: Digital identity is crucial for protecting patient privacy and streamlining services. AI helps verify identities and manage access to sensitive medical records, ensuring secure and personalized care.
E-commerce: Online retailers use AI to prevent identity theft by analyzing shopping patterns. AI can flag unusual transactions that may indicate fraud, protecting both the customer and the retailer.

Is there an industry that AI won’t touch? If that industry has any kind of online presence, then I’d say no, probably not.

The Global View: Working Together on AI and Digital Identity

Digital identity challenges aren’t confined to one country—they’re global. Just like when thinking about the Internet, commerce, and human migration, geopolitical boundaries are just another consideration when it comes to digital identity. I’ve already mentioned the EU’s AI Act. If you’re following this space at all, you should also be aware of the OECD’s AI Principles, initially published in 2019 and updated earlier this year (May 2024). If you’re in the US, you really need to check out the Executive Order President Biden’s administration posted in October 2023, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”

It’s always fascinating (and a little scary) when technology outpaces the law. Of course, it’s not all that great when the law outpaces technology and starts to make stuff up about what’s possible. If it wasn’t my digital identity and that of my 8 billion fellow humans, I’d heat up some popcorn and watch the demolition derby that is technology standards and regulations.

Wrap Up

So, yup, AI is having a big impact on digital identity. It’s making things safer, improving user experiences, and helping industries operate more efficiently. But with these benefits come challenges, especially around privacy and security.

For tech leaders, you kind of don’t have a choice. Your organization needs to get involved in shaping AI-driven digital identity solutions. By adopting these technologies now AND following the principles that exist to make it safe for your employees and customers, you will improve your organization’s security and efficiency. If you don’t, the hackers of the world will thank you.

And if you’re an individual contributor like me, stay on top of the tech news for the latest in security recommended practices. Look for any open calls for comments on the standards and principles that impact this space.

Of course, if you’d like to outsource paying attention to all this and get someone to write a monthly report on the latest, reach out to me, and we’ll see what’s possible.

The post Digital Identity in the Age of AI: Challenges and Opportunities appeared first on Spherical Cow Consulting.


Ontology

Unleash Your Inner Ontonaut with OntoNex Level


Are you ready to take your journey with Ontology to the next level? Introducing the OntoNex Level Program — our latest initiative designed to reward you for being an active part of the Ontology community. Whether you’re a conversation starter, network builder, or community guardian, there’s a role for you to shine and earn rewards along the way.

What’s OntoNex Level All About?

The OntoNex Level Program is more than just a rewards system; it’s a pathway for you to maximize your potential within the Ontology ecosystem. Each role is tailored to match your strengths and passions, allowing you to contribute meaningfully and earn coins that can be redeemed for exclusive rewards.

The Roles:

Chatster: Energize the community with engaging conversations. Unlock achievements and earn coins with every message.
Inviter: Grow our network by inviting new members. Earn 10 coins for each successful invite.
Guard: Maintain a safe and welcoming environment by reporting spam. Earn 10 coins for every spam report.
Helper: Share your Ontology knowledge by assisting others. Earn 10 coins for each helpful interaction.
Campaigner: Participate in various community campaigns and events. Earn 5 coins for every event you join.

Level Up and Unlock Exclusive Rewards

As you accumulate coins, you can redeem them for special rewards:

100 coins: Buy a Loyal NFT Plus.
2000 coins: Unlock the ‘Monthly NFT Receiver’ role, and receive an NFT every month.
5000 coins: Unlock the ‘Weekly NFT Receiver’ role, and receive an NFT every week.

Track Your Progress

Stay on top of your achievements with these simple commands:

/achievement: See your progress in completing achievements.
/coins: Check your current coin balance.
/buy: Purchase items with your coins.
/item: View the items you already own.

Join Us on Discord!

Ready to dive in? The best way to get started is by joining our Discord community, where you can take on your role, engage with fellow Ontonauts, and start earning rewards today. Click here to join our Discord.

Unleash Your Inner Ontonaut with OntoNex Level was originally published in OntologyNetwork on Medium, where people are continuing the conversation by highlighting and responding to this story.

Sunday, 25. August 2024

KuppingerCole

WAF, WAAP, What? The Evolution of Web Application Firewalls


What makes a Web Application Firewall (WAF) a Web Application and API Protection (WAAP) solution? How is the landscape of the market changing and does every organization need a WAAP solution? Tune in to this episode of the Analyst Chat with guest Osman Celik and host Matthias Reinwarth to learn more.

Dive deeper into the topic



Friday, 23. August 2024

Elliptic

OFAC targets Russian war effort with 400 sanctions, identifying a crypto address connected to KB Vostok


The US Treasury’s Office of Foreign Assets Control (OFAC) has today issued sanctions against nearly 400 individuals and entities whose products and services enable Russia to sustain its war effort and evade sanctions. 

Amongst those sanctioned today is KB Vostok (A.K.A. Vostok Design Bureau), a drone manufacturer that specialises in the “development of industrial-grade unmanned aerial vehicles”. 


Dock

ISO 18013-5 Standard: What It Is And How It Works


With the growing adoption of digital identity initiatives, it has become more complex to ensure security, interoperability, and compliance, requiring adherence to rigid and evolving standards.

This is where ISO 18013-5 comes into play, offering a standardized approach to secure and verify digital identities. It's the backbone of mobile driver’s license (mDL) implementations, providing guidelines that enhance trust and facilitate verification processes.

In this post, we'll explore ISO 18013-5, covering its definition, benefits for governments, businesses, and individuals, and development history.

Full article: https://www.dock.io/post/iso-18013-5


KuppingerCole

The Anatomy of Cyber Resilience


by Osman Celik

In today's business landscape, cyber resilience is crucial for an organization's ability to sustain operations and deliver desired outcomes in the face of cyber threats or incidents. Cyber resilience encompasses not only the prevention and protection against cyber threats but also the ability to detect, respond to, and recover from them effectively. While often confused with cybersecurity, cyber resilience serves a distinct purpose within an organization's risk management strategy.

Cybersecurity vs. Cyber Resilience

Cybersecurity primarily focuses on protecting systems, networks, and data from unauthorized access. This is achieved through mechanisms such as firewalls, encryption, detection and response systems, and identity and access management. In contrast, cyber resilience goes a step further by ensuring business operations continue during and after a cyber incident. While cybersecurity aims to prevent incidents, cyber resilience assumes that breaches may occur and emphasizes maintaining business continuity and facilitating swift recovery.

The Inevitable Future with AI

As AI continues to integrate into our daily lives, it is inevitable that it will play a significant role in maintaining business continuity. However, this development presents both opportunities and challenges. On one hand, AI-powered tools enhance cyber resilience by improving detection and response times, as well as predicting and mitigating potential vulnerabilities. These technologies enable more sophisticated automation and reduce the impact of human error. On the other hand, AI also introduces new risks, as attackers leverage the same technologies to develop more advanced and sophisticated attacks.

Developing Cyber Resilience Strategies

Creating effective cyber resilience strategies involves thorough risk assessment, proactive planning, and continuous improvement. Organizations must begin by identifying their critical assets and assessing potential threats to understand their specific cyber threat landscape. With this information, they can establish a tailored cyber resilience framework.

A robust cyber resilience framework typically includes preventive measures like regular security updates and employee training, alongside incident detection and response protocols. Building resilience also requires regularly testing recovery and backup plans. Organizations should adapt their strategies based on lessons learned from past incidents and anticipate future challenges, which requires expertise, skill, and informed predictions.

Key Components of Cyber Resilience

Cyber resilience provides organizations with clear guidelines on restoring operations after a cyber incident. This involves well-defined recovery plans that are regularly tested and updated to address emerging vulnerabilities. Identifying critical systems and data is a priority, allowing organizations to focus their recovery efforts where they are needed most.

A cornerstone of cyber resilience is data backup. Without a reliable backup, a recovery plan is essentially ineffective. Backup strategies should be integrated into the broader resilience framework, with backups regularly updated and securely stored in multiple locations to protect against cyber threats. The emphasis is not just on creating backups but also on ensuring the ability to quickly access and restore data from these backups without compromising security or operational continuity.

Choosing the Right Frameworks for Your Cyber Resilience Strategy

When developing a cyber resilience strategy, organizations should consider key frameworks. The NIST (National Institute of Standards and Technology) Cybersecurity Framework offers a well-established approach with its six pillars: Identify, Protect, Detect, Respond, Recover, and Govern. Additionally, regulations such as DORA (Digital Operational Resilience Act) and NIS2 (Network and Information Systems Directive 2) should be reviewed, particularly by organizations operating within the European Union, to ensure that backup and recovery strategies are compliant and robust.

We are back in town - cyberevolution 24

We are excited to invite you to our cyberevolution event in Frankfurt on December 3-5, 2024. We will be exploring a wide range of cybersecurity topics, with plenty of chances to chat with industry experts. Cyber resilience will be one of the big topics on the agenda. In a combined session, Mike Small will discuss “Why you need data backup and how AI can help” and Joshua Hunter will provide insights into “Focus on Cyber Resilience - Prepare, Respond, Resume”. We look forward to seeing you there and to having some great discussions.


Tokeny Solutions

56% of Fortune 500 Are Onchain: APIs Are Your Key to Staying Ahead


Product Focus

56% of Fortune 500 Are Onchain: APIs Are Your Key to Staying Ahead

This content is taken from the monthly Product Focus newsletter in August 2024.

Demand for onchain services is growing rapidly, with institutions increasingly moving into the space. According to recent research by Coinbase, 56% of Fortune 500 executives report that their companies are actively working on onchain projects. Tokenized assets, like T-bills, have become some of the hottest investments, with the value of tokenized US Treasury products soaring over 1,000% since the start of 2023, reaching $1.29 billion.

Crypto hedge funds and market makers are leveraging tokenized assets, such as BlackRock’s BUIDL, as collateral for trading coins and tokens, unlocking unique onchain opportunities not available offchain.

Source: Coinbase

The Challenges of Building Onchain Solutions In-House

While institutions are eager to move onchain, success hinges on control, compliance, and interoperability within open DLT infrastructures. Building these solutions in-house is risky, time-consuming, and expensive. That’s where our T-REX Engine, a suite of onchain APIs, comes in.

Why APIs Are the Backbone of Onchain Expansion

APIs are crucial because they enable seamless integration with existing systems, allowing institutions to build onchain capabilities without the need to overhaul their current infrastructure. They provide the flexibility and scalability needed to adapt to new market demands and regulatory requirements, making the transition to onchain more efficient, less risky, and faster. By offering modular, plug-and-play functionality, APIs ensure that institutions can quickly develop, deploy, and manage onchain services, keeping them ahead of the competition in a rapidly evolving market.

Introducing the T-REX Engine

The T-REX Engine is designed to empower institutions with a customizable onchain solution and a fully integrated ecosystem that ensures a plug-and-play experience. Here’s what sets T-REX apart:

Most Proven Tokenization Engine: Shaped by over 120 tokenization use cases over the past seven years, T-REX has developed 1,000+ features, leveraging the best market standards like the ERC-3643 framework.
Incomparable Ecosystem: With one access point, you connect to everything you need both onchain and offchain to manage tokenized securities and cash, thanks to our comprehensive ecosystem.
Banking-Grade Security: We implement banking-grade security measures, with certifications like SOC2 and a 10/10 security score from smart contract audits, to secure your onchain operations.

By integrating with your existing systems, T-REX Engine enables you to customize tokenization use cases, providing your clients with an e-commerce-like asset purchase experience, a PayPal-like asset transfer experience, and new opportunities like interoperable assets with DeFi lending apps.

Here’s a brief overview of the T-REX Engine APIs:

Identities APIs: Ensure compliance with onchain identity management by tracking ownership and enforcing eligibility rules.
Assets APIs: Manage the entire lifecycle of tokenized assets, from securities and cash to real-world assets, with streamlined, all-in-one access.
Offers APIs: Once tokenized, your assets can be made available anywhere onchain with enforced compliance; Offers APIs help you manage this effortlessly, covering any kind of distribution in both primary and secondary markets.
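Tokeny's actual endpoints aren't reproduced in this newsletter, so the following sketch is purely hypothetical: the base URL, paths, and field names are invented solely to show the shape of an API-driven issuance call, not the real T-REX Engine API:

```typescript
// Hypothetical sketch only: this is NOT Tokeny's real T-REX Engine API.
// Every URL, path, and field name below is invented for illustration.
const BASE = "https://api.example-trex.io";

async function issueAsset(apiKey: string) {
  const res = await fetch(`${BASE}/assets`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      name: "Example Fund Token",               // hypothetical
      standard: "ERC-3643",                     // the standard named in the post
      eligibility: { requiredClaims: ["KYC"] }, // hypothetical field
    }),
  });
  return res.json();
}
```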

Learn more about these APIs on our website here or reply to this email to unlock API access and start building your own onchain system today.

Subscribe Newsletter

This monthly Product Focus newsletter is designed to give you insider knowledge about the development of our products. Fill out the form below to subscribe to the newsletter.

Other Product Focus Blogs

How Tokeny’s Platform Empowers Fund Administrators To Act in Onchain Finance (20 September 2024)
56% of Fortune 500 Are Onchain: APIs Are Your Key to Staying Ahead (23 August 2024)
The Journey to Becoming the Leading Onchain Finance Operating System (19 July 2024)
Streamline On-chain Compliance: Configure and Customize Anytime (3 June 2024)
Multi-Chain Tokenization Made Simple (3 May 2024)
Introducing Leandexer: Simplifying Blockchain Data Interaction (3 April 2024)
Breaking Down Barriers: Integrated Wallets for Tokenized Securities (1 March 2024)
Tokeny’s 2024 Products: Building the Distribution Rails of the Tokenized Economy (2 February 2024)
ERC-3643 Validated As The De Facto Standard For Enterprise-Ready Tokenization (29 December 2023)
Introducing Multi-Party Approval for On-chain Agreements (5 December 2023)

Tokenize securities with us

Our experts with decades of experience across capital markets will help you to digitize assets on the decentralized infrastructure. 

Contact us

The post 56% of Fortune 500 Are Onchain: APIs Are Your Key to Staying Ahead appeared first on Tokeny.

Thursday, 22. August 2024

Spruce Systems

Debunking Myths about the Mobile Driver's License

Learn about some of the common misconceptions when it comes to mobile driver's licenses (mDLs).

While artificial intelligence is in the spotlight, a quieter technology revolution is underway: a large-scale push to build secure digital identity systems. This is, in part, driven by verifiable digital identity being a complementary technology to AI. With AI-generated text, images, and increasingly convincing videos, having a way to verify something or someone is provably who or what they claim to be will be crucial. The heightened security of encryption-backed identity can dramatically mitigate types of fraud, hacking, and impersonation.

Building digital ID is largely a problem of coordination – getting buy-in for a novel system from everyone from legislators to major enterprises to state agencies. One early leader in contention for defining the digital ID future is a set of standards known as “mDL,” or the Mobile Driver’s License – a real, state-issued credential stored on a mobile device. The mDL is just one part of the fast-growing digital identity ecosystem, but it’s being used in our pilot program with the state of California and other pilots across the United States.

You might have some preconceptions about how a driver’s license that lives on a mobile device works based on your familiarity with other digital services, such as logging in to a website. But this new generation of credentials is built much differently, using recent innovations in cryptographic digital signatures.

This makes digital credentials, like a mobile driver’s license, far more secure and private than a web-based service, among other implications. But to understand this new kind of security and privacy, you have to leave behind some old ideas.

The “Photo of a Plastic ID” Myth

A mobile driver's license (mDL) is far more than just a digital image of your physical ID. Unlike a simple photo, an mDL is embedded with cryptographic digital signatures, ensuring that the data it contains is both tamper-evident and provably authentic. This means that anyone verifying your ID, whether in person or online, can trust that the information hasn’t been altered, providing higher security and trust than a static image.

One of the key advantages of mDLs is their versatility in both physical and digital realms. Whether you're verifying your identity in person, such as at a traffic stop or an airport, or over the internet for online services, mDLs offer a seamless digital verification experience. This flexibility is something a static image on your phone just can’t offer, especially as our lives become more intertwined with digital interactions.

While a photo of your ID reveals all your personal details, a significant benefit of mDLs is the ability to share only the necessary information for a specific interaction, rather than revealing all the personal details on your driver's license. For example, if you're buying age-restricted products, the mDL can confirm your age without exposing your address or other sensitive information. This minimal disclosure feature enhances privacy and reduces the risk of identity theft.
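As an illustration, a verifier’s age-check request under ISO/IEC 18013-5 might look roughly like the sketch below. It is shown as a plain TypeScript object rather than the CBOR encoding actually used on the wire, and the field names should be read as a simplified rendering of the standard’s DeviceRequest structure.

```typescript
// Illustrative sketch of an ISO/IEC 18013-5 style device request; in a real
// exchange this structure is CBOR-encoded per the standard.
const ageCheckRequest = {
  version: "1.0",
  docRequests: [
    {
      itemsRequest: {
        docType: "org.iso.18013.5.1.mDL",
        nameSpaces: {
          "org.iso.18013.5.1": {
            // Request only the age attestation; the boolean is the verifier's
            // declared "intent to retain" the data element.
            age_over_21: false,
          },
        },
      },
    },
  ],
};
// The wallet can answer with a signed response containing only `age_over_21:
// true`, never revealing name, address, or date of birth.
```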

Finally, mDLs are built on global standards like ISO/IEC 18013-5 and ISO/IEC 18013-7, which means they can be accepted across industries and borders. A photo of your ID might be accepted in some places, but it lacks the standardization needed for widespread trust and interoperability. These standards ensure that mDLs can be trusted by various entities, from law enforcement agencies to financial institutions, no matter where you are. This broad acceptance and reliability make mDLs a future-proof solution for secure identity verification in our interconnected world.

The “Phone Home” Myth

If you’re still new to the idea of the mobile driver’s license, you might assume they offer less privacy than a hard-copy ID. From bank accounts to college enrollment, we’ve become very used to proving our identity by sending a password to a remote database over the internet. Similarly, you might assume that a mobile driver’s license may require pinging back to a government agency server whenever someone wants to verify your identity. If that were how a mobile driver’s license worked, it would create yet another trail of data that could be used to track you, like many web services do today. This is known as the “phone home” problem.

To be clear, mobile driver's license programs can be implemented in that way, creating (even inadvertently) a new surveillance system. But there are ways to implement mobile driver's licenses that don't have to "phone home," which is how we approach our implementations at SpruceID in our work with customers.

The mDL standard is ultimately a shared data format, and the systems around it can be built in many ways. But the core mDL architecture can be implemented using an entirely new kind of digital “proof” that checks the validity of an ID issuer’s digital signature locally, called “device retrieval” in the mDL specification. That means no pinging a remote server, and no risky data trail.

Instead, a mobile driver’s license (or other digital credential) is verified from a file on your device. That file includes a digital “signature” proving that it came from the correct issuer, like the DMV. The signature corresponds to a secret private key held by the issuing agency, so no one but the DMV can issue DMV-signed credentials; the credential is tied to your specific hardware device, so the file itself can’t be copied; and it is cryptographically bound to your identity information, so it can’t be tampered with.
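To make “device retrieval” concrete, here is a minimal sketch of the local check, assuming an Ed25519 issuer key and simplified data structures. Real mDL verification follows the COSE signature structures defined in ISO/IEC 18013-5; all names here are illustrative.

```typescript
// Minimal sketch of device-retrieval style local verification: the verifier
// checks the issuer's signature over the credential data using a public key
// it already trusts (e.g., cached from a published issuer registry).
import { createPublicKey, verify } from "node:crypto";

function verifyCredentialLocally(
  credentialBytes: Buffer,    // the signed credential payload
  signature: Buffer,          // issuer's signature over that payload
  issuerPublicKeyPem: string, // trusted issuer key, held on the verifier's device
): boolean {
  const issuerKey = createPublicKey(issuerPublicKeyPem);
  // Ed25519 assumed for brevity. No network call is made at any point, so the
  // issuer never learns where or when the credential was checked.
  return verify(null, credentialBytes, issuerKey, signature);
}
```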

The “Supercookies” Myth

Even if a digital identity check doesn’t create a real-time trail of digital pings over the internet, an ID check can still leave a record on the device or system of the verifier. For instance, when you buy a case of beer, the liquor store might not ping the DMV’s server – but it will probably retain a record of the verification. 

These records can be a risk to your privacy. If a third party gathers together the scattered records of your ID checks, it can build a record of some of your activities – for instance, how often you visit the liquor store. This is already a widespread practice when it comes to records of your web browsing – the collated records of your online activity are known as “supercookies,” and are often used to target you with advertising.

This risk is a good example of how regulation and best practices are necessary complements to new technology – new laws, or reasonable disclosure frameworks, might be needed to ban the practice of making real-world supercookies. However, there’s also a more immediate solution: the issuers of digital credentials can impose data-deletion policies that require verifiers to delete records of identity checks. 

With a few exceptions, such as law enforcement, verifiers should be okay with deleting these records immediately, significantly reducing supercookie risk. Best of all, there are cryptographic methods for proving that the data is actually disposed of.

This is a great example of a key principle in digital credential design. The mobile driver’s license (mDL) is a data standard for digital identity, but many of the systems around that data standard can be designed in many different ways. Some ways of building an mDL system might enable or even encourage archiving data to build a “supercookie,” but systems can also be built to discourage or disallow them. 

By the same token, other digital credential standards, including SD-JWTs and W3C Verifiable Credentials, can also be deployed in ways that enable tracking. In essentially every case, no tech standard can guarantee user privacy; therefore, how the system is designed, and how that design is guided by regulations and agreements, is key.

Technology, Legislation, and Markets In Harmony

Unfortunately, the greater privacy and control enabled by encryption-based digital identity won’t just happen magically. While the technology has the potential to create a more innovative and secure system, the specific way it is built in the coming years will determine whether that potential is fulfilled. 

Many of the teams building these systems have the highest ideals, and are already working to build privacy-preserving features into their structure. But technology alone isn’t enough, in this case, or in general: technology and policy must work in concert to create the future we want.

We believe the best way to guarantee a future identity system that’s both secure and private is legislation that supports the goals of the technology. That legislation, which organizations like the ACLU are currently pushing forward, would bar abuses like surveillance using digital identity – whether for commercial purposes, or more nefarious ones.

We encourage all players in the digital identity space, and potential future users of tools like the mobile driver’s license, to participate in those legislative efforts. Done right, they will help make sure that an exciting new technology supports freedom, safety, and innovation, working together as one.

Are you interested in learning more about digital credentials such as the mobile driver’s license and how they might work for your use case? Explore our website to learn more.

Learn More

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


IdRamp

MS Entra ID: Advanced Account Recovery with Identity Verification

IdRamp has partnered with Microsoft (MS) to bring Identity Verification (IDV) to the Entra ID account recovery process. Account takeover attacks increased by 350% last year, causing nearly $13 billion in losses.

The post MS Entra ID: Advanced Account Recovery with Identity Verification first appeared on Identity Verification Orchestration.

Caribou Digital

Rethinking innovation funding in the age of AI

Applicants can now use generative AI to craft powerful funding proposals. What does it mean for organizations running competitive grants and innovation funds? A significant shift is underway in the ever-evolving landscape of impact investing and competitive grant-making programs. In recent years, artificial intelligence (AI) has become a buzzword in many domains, including in donor funding lands

Applicants can now use generative AI to craft powerful funding proposals.

What does it mean for organizations running competitive grants and innovation funds?

A significant shift is underway in the ever-evolving landscape of impact investing and competitive grant-making programs. In recent years, artificial intelligence (AI) has become a buzzword in many domains, including in donor funding landscapes. It is pushing funding organizations to rethink how they approach innovation funding and how to ensure the “do no harm” principle applies when delivering innovation for social, environmental, and economic impact.

At Caribou Digital, we’re keenly focused on how generative AI can impact, modulate, and drive an inclusive and ethical digital world. Large language models (LLMs), like ChatGPT, have particularly piqued our interest in our fund management work and are causing us to reflect on our approaches and practices. This blog post highlights some of these reflections.

Embracing LLMs in grant-writing: A double-edged sword

ChatGPT’s emergence has brought about three critical lessons for consideration:

1) Generative AI can break down barriers to applying for grants (like time and skill gaps)

ChatGPT and other LLMs are impressively proficient in writing grant applications. There are even some LLMs focused specifically on grant writing, like Grantable and others. The “traditional” grant application process has been a grueling task. It can be complex, time-consuming, and disempowering for applicants. It often takes senior staff away from their day-to-day duties and regularly offers no reward for their efforts. Applicants are commonly unsuccessful because they fail to clearly and effectively convey their idea, innovation, or project plan. However, new tools — from Grammarly to grant-writing LLMs — have the potential to save applicants time and money in this process. They can make grant-writing more accessible and less intimidating, as well as reduce language barriers or address accessibility issues for applicants with disabilities.

2) Generative AI makes it easier to communicate compelling ideas clearly

Encouraging AI in grant proposals can democratize idea sharing, allowing for a broader range of applicants to present their visions compellingly and coherently. AI could level the playing field for small organizations with limited or no access to experienced grant writers. Or, fund managers may see that grant applicants with disabilities and those who are neurodiverse are better able to write applications without worrying about how their dyslexia (for example) might limit their chances of funding success. So, it is Caribou Digital’s theory that a more diverse pool of applicants can now complete grant applications quickly and unlock critical funding.

3) Generative AI, if used effectively by fund managers, can encourage “unusual suspects” to apply to their grant programs

By lowering the traditional barriers to entry for grants, like time and language costs, LLMs open doors for a more diverse pool of innovators.

Here’s a case study to demonstrate how LLMs could reach “unusual suspect” innovators.

As a fund manager, Caribou Digital usually requests grant applications in a single language: English. This is mainly because we manage grants in English, so all our policies, templates, and tools for tracking require input in English. We understand this immediately creates a bias against non-native English speakers, who have to convey complex, often technical ideas in their second or third language. If innovators could apply for community-based projects in more relevant languages (e.g., Swahili, Luganda, Arabic, Bengali, etc.), would more people apply with truly exciting and/or community-based ideas?

Today, even basic LLM translation services can enable small, community-based organizations to quickly submit quality applications. Hypothetically, these tools would allow us to receive applications in local dialects and engage throughout the grant period in some of those languages, even if our team doesn’t have fluency in the selected language. But we also need to be highly conscious that these bold changes to processes could introduce new biases, as LLMs are known to perform poorly when generating content in non-English languages. (See, for example, this article on AI language equity issues from Rest of World.)

Photo by Igor Omilaev on Unsplash

How can we identify authentic talent? Why we are rethinking our practices

In the context of generative AI and grant-making, fund managers need to be acutely aware of how biases could get built into project design. Even without the widespread use of LLMs, there is almost always bias in the selection of grants. It is therefore logical to assume LLMs can exacerbate existing (or even create new) bias in grant-awarding processes.* This selective bias makes it incredibly challenging to engage with grant-making tools. It is our responsibility as fund managers to actively ensure no conscious or unconscious bias is introduced into the process.

If, for example, fund managers allow AI tool use in grant applications, we must also invest in a rigorous evaluation of bias, perhaps even involving undercover critical colleagues as independent teams to reduce bias in application processes. By doing so, we ensure that using AI in grant-making processes does not inadvertently perpetuate existing inequities.

While AI can polish and perfect an application, it’s essential to develop mechanisms that enable fund managers to capture the authentic talent behind “artificial intelligence.” It’s time to rethink how we structure our submission practices and interfaces. We must find ways for applicants to demonstrate their authentic selves beyond the more polished face that LLMs and other AI tools can provide. This requires a fundamental shift in our approach: embracing AI where it enhances equity and inclusion while remaining vigilant against its potential to introduce new forms of bias.

At Caribou Digital, we’re committed to exploring innovative methods that allow for a more genuine representation of applicants’ potential. By doing so, we can ensure that the best ideas, no matter where they come from, have a fair chance to shine. We’re currently thinking about ways we can support genuineness in applications, such as:

- Allowing applicants to provide a video application (rather than solely text-based applications).
- Reducing or removing the need for computer access by running an application process on WhatsApp or mobile phone, for example.
- Plugging into existing platforms that allow applications to be submitted from an existing profile or organizational presence (e.g., f6s or LinkedIn).
- Working with community-based organizations who can make initial recommendations or referrals on behalf of potential grantees and omitting lengthy written applications.

We know that none of these ideas will eliminate bias in grant applications and assessments (some might even exacerbate it). However, AI tools in grant-writing have highlighted the need for innovation in how we assess authenticity and potential, and it’s time to test some new approaches to doing so.

Please reach out if you’d like to discuss this further.

*The perception of bias varies widely; what seems unbiased to one person may be seen differently by someone with a different background or political belief. One excellent showcase of some examples of this is the Rest of the World AI series.

Rethinking innovation funding in the age of AI was originally published in Caribou Digital on Medium, where people are continuing the conversation by highlighting and responding to this story.


Evernym

New Trends in Access Management: Embracing the Future of Security

In today’s digital world, access management is more critical than ever. Organizations are increasingly recognizing the need to protect their data and systems from unauthorized access while providing seamless user experiences. The landscape of access management is evolving rapidly, with new ...

The post New Trends in Access Management: Embracing the Future of Security appeared first on Evernym.


Ockto

Open Banking & PSD2: On income verification with bank transactions, among other topics

Podcast: Open Banking & PSD2
On income verification with bank transactions, among other topics

In this episode of the Data Sharing Podcast, we dive into the world of Open Banking and PSD2. Open Banking enables consumers and businesses to share financial data with third parties, creating new opportunities for innovation and service delivery within the financial sector.

Thanks to Open Banking, organizations can quickly and accurately verify the income data of potential customers, leading to more efficient and reliable credit assessments and other (financial) services.

Wednesday, 21. August 2024

Elliptic

The US stablecoin landscape: leveraging Ecosystem Monitoring to build trust

The United States policy and regulatory landscape remains in significant flux when it comes to the topic of stablecoins.


Lockstep

What do verifiable credentials verify?

Verifiable credentials are one of the most important elements of digital identity today. What exactly does a verifiable credential verify? And while we’re on the subject, what is a credential anyway? Let’s start with existing analogue credentials. Thanks to English, “credential” can be a verb or a noun. And the noun can take two or... The post What do verifiable credentials verify? appeared firs

Verifiable credentials are one of the most important elements of digital identity today.

What exactly does a verifiable credential verify?

And while we’re on the subject, what is a credential anyway?

Let’s start with existing analogue credentials. Thanks to English, “credential” can be a verb or a noun. And the noun can take two or three very different meanings.

Photo credit: Akbar Nemati via Pexels.

Credentialing

The noun credential usually refers to “a qualification, achievement, quality or aspect of a person’s background, especially when used to indicate their suitability for something” (Ref: Oxford Languages).

There’s a subtle implication in the everyday sense of the word: a credential is generally associated with the criteria for its particular quality and suitability.

Consider professional credentials. A budding accountant, for instance, must obtain a particular degree by passing certain tests set by a university; in addition, that degree needs to be deemed suitable by a professional accounting body.

So in this sense, every credential is an abstraction which represents that the holder has satisfied certain rules. A credential has meaning and context.

As a verb, “credential” means to provide someone with credentials.  This might seem obvious, but I think it’s the more important sense of the word. A credentialing process is a formal (rules-based) sequence of events, which has usually been designed to establish the holder’s suitability to undertake specific activities. There is a tight relationship between the credentialing process and the intended use of the credential.

Examples include the onboarding of new employees, enrolment in university courses, admission to professional associations (including recognition of international qualifications), approval of journalists to attend special events such as political conventions, security clearances, and nations’ citizenship requirements.

Credentialing processes are famously conservative. They are the sovereign stuff of nations, academic institutions, and professional societies. Right or wrong, professional credentials are notoriously provincial and difficult to have recognised between different jurisdictions. Credentialling bodies zealously represent communities of interest and reserve the right to set rules as they see fit.

Going from physical to digital credentials

Traditionally, many credentials have been physically manifested as cards, membership tokens and other badges, used by the holder to prove their status to other parties who need to know. These items provide a number of familiar cues to assure us that a credential is genuine, the issuer is legitimate, and the credential hasn’t been modified. Some include photographs which help to show that the credential is in the right hands when presented.

By the way, the plastic card itself is sometimes called a “credential”, but it is more useful to think of it as a carrier or container of credentials, especially as we shift from analogue to digital.

Yet in the move to digital, most credentials in the abstract sense have retained their essential meaning. For example, a government authorised Medicare provider or licenced plumber should be able to assert precisely the same authority in any of their digital workflows—nothing less and nothing more—as they do in the real world.

Credit cards as credentials

A credit card is a token which signifies that the holder is a paid-up member of a payment scheme. The principal data carried by a credit card is a specially formatted number (known as the Primary Account Number or PAN) which encodes membership of the scheme, identifying the cardholder, the scheme and the issuing bank. Note that a credit card is a container that usually carries just one credential.

Credit card numbering has remained unchanged for decades. With the introduction of electronic commerce, shoppers were able to use their card numbers online, thanks to Mail Order / Telephone Order (MOTO) rules. These had been established years before e-commerce, to allow merchants to accept plaintext card numbers in card-not-present (CNP) settings.

To combat CNP fraud, the Card Verification Code (CVC) was introduced — an additional number on the back of the credit card that was not captured by merchants’ card imprinting machines, whose paper records were vulnerable to dumpster-diving identity thieves.

The CVC is a classic example of security metadata — an additional signal used to confirm the data that really matters, namely the credit card number. Credit card call centre operators had access to back-office lists of PANs and matching CVCs; if a caller could quote the CVC correctly, it was assumed they had the physical card in their hands.

Enter cryptography

Verifiable credentials (sometimes “VCs” for short) are the strongest mechanism today for asserting important personal attributes, such as driver licences, professional qualifications, vaccinations, proof of age, payment card numbers and so on. VCs are central to the next generation European Union Digital Identity (EUDI), the ISO 18013-5 standard mobile driver licences (mDLs) and the latest digital wallets.

Several new VC data structure standards are under development, including the World Wide Web Consortium (W3C) VC data model and ISO 18013-5 mdocs.

All forms of VC include the following:

- information about a particular “Subject” (usually a person, also referred to as the credential holder) such as a licence number or other credential ID
- a name for the Subject (typically a legal name, but pseudonyms are sometimes possible)
- the digital signature of the issuer
- usually a public key of the Subject (used to verify signed presentations of the VC made from a cryptographic container or wallet)
- metadata about the credential (such as its validity period and the type of container it is carried in)
- metadata about the issuer (such as a company legal name, corporate registration number, Ts&Cs for credential usage etc.).
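For illustration, here is a minimal credential shaped along the lines of the W3C VC data model; every value is invented, and real credentials carry issuer-specific contexts and proof types.

```typescript
// A minimal credential in the spirit of the W3C Verifiable Credentials data
// model. All identifiers and values below are fabricated for illustration.
const plumbingLicenceVC = {
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  type: ["VerifiableCredential", "TradeLicenceCredential"],
  issuer: {
    id: "did:example:state-licensing-board",
    name: "Example State Licensing Board", // issuer metadata
  },
  validFrom: "2024-01-01T00:00:00Z",
  validUntil: "2026-01-01T00:00:00Z",      // credential metadata
  credentialSubject: {
    id: "did:example:holder-123",          // resolves to the Subject's public key
    name: "Alex Example",
    licenceNumber: "PL-004732",            // the credential ID
  },
  proof: {
    type: "DataIntegrityProof",
    verificationMethod: "did:example:state-licensing-board#key-1",
    proofValue: "z5Cx...",                 // the issuer's digital signature
  },
};
```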

The digital signature of the issuer preserves the provenance of a verifiable credential: anyone relying on the VC can be assured of its origin and be confident that the credential details have not been altered.

When a VC is presented from a cryptographically capable wallet, a message or transaction incorporating the credential can also be digitally signed using a private key unique to the credential. This assures the receiver that the credential as presented was in the right hands.

Verifiable presentation proves the proper custody and control of the credential and is just as important as verifiability of a credential’s origin.

Telling the story behind the credential

Provenance and secure custody are unique assurances provided by verifiable credentials, but I think the greater power of this technology lies in the depth of the metadata.

VCs deliver rich ‘fine print’ about the credential, the issuer, the wallet and the way in which it was presented, all reliably bound together through digital signatures. So whenever you use a VC to access a resource or sign a piece of work, you leave behind an indelible mark that codifies the history of your credential.

As mentioned, a credential is issued through a formal process, and is recognised by a community of interest as signifying the suitability of its holder for something.

For a person to hold a verifiable credential in a personal cryptographic wallet, a series of specific steps must have taken place.

First and foremost, the Issuer will satisfy itself that the Subject meets all the credentialling requirements. A VC usually carries a public key unique to the Subject and their wallet; this binding to a physical device means the Issuer can be sure that it hands out its credentials only to the correct individuals. It also allows the Issuer to specify the precise type of device(s) used to carry its credentials — all the way down to smartphone model and biometric performance, if those things matter under the Issuer’s security policy.

Virtual credit cards in digital wallets

Continuing our look at credit cards as credentials, the provisioning of virtual credit cards to mobile wallets illustrates the degree of control that a VC issuer has over the end-to-end process.

Typically, a virtual credit card is provisioned to a digital wallet via a mobile banking app running on the same device. Banks control how their apps are activated. Almost anyone can download a banking app from an app store, but only a genuine customer can get the app to do anything, by following their bank’s prescribed activation steps (which might include, e.g., entering account-specific details, calling a contact centre, or even visiting a branch for additional checks). Only then will the bank send secure instructions to the device to load a virtual card. The customer will need to unlock their phone (by biometric or PIN) to complete the load.

Behind the scenes, any bank offering mobile phone credit cards must have also made prior arrangements with the phone manufacturer to gain access to the hardware. Apple and Google (the major digital wallet platforms) undertake rigorous due diligence so that only legitimate banks are granted this all-important power.

All this history is coded as metadata into the verifiable credential. When a merchant point-of-sale system receives a signed payment instruction from a digital wallet, we can all be sure that:

- the digital wallet has been unlocked by someone who controls the phone
- the credit card is genuine and was issued by the bank indicated in the credential
- the card was loaded to the wallet by a customer who was approved to use the mobile banking app and was authenticated to do so (making it highly likely that the mobile phone customer and the cardholder are the same person)
- the cardholder is a registered customer of the bank and has passed that bank’s KYC processes.

The VC can include the type of phone it is carried in; it is even possible for the VC to record whether the virtual card was issued remotely or in person.

Minimalist VCs

The acute problem with online authentication today—often given the catch-all label “identity theft”—arises from the use of plaintext credentials and identifiers.

There are countless scenarios where a counterparty needs to know you have a particular credential, but if the only evidence you can provide is a plaintext number, then businesses and individuals alike are sitting ducks because so many identifiers have been stolen in data breaches and traded on black markets.

The simplest, lowest risk solution is to conserve the important IDs we are all familiar with, but harden them in digital form, so they cannot fall into criminal hands.

That might sound complicated, but we have done it before!

The transition from magnetic stripe to chip payment cards was made for exactly the same reason: to eliminate plaintext data.  Chip cards present cardholder data through digitally signed verifiable messages — making them one of the earliest examples of verifiable credentials.

Digital wallets use the same technology as chip cards and are rapidly taking over from plastic. The Reserve Bank reports that well over one third of card payments by Australian consumers are now made through mobile wallets. Yet as we have seen, the meaning and business context of credit cards were unchanged through the course of these technology upgrades. That conservation of credentialing processes was key to the chip revolution.

Minding your business

In any digital transformation, it is not the new technology that creates the most cost, delay and risk; rather it’s the business process changes. The greatest benefit of verifiable credentials is they can conserve the meaning of the IDs we are all familiar with, and all the underlying business rules.

The real power of VCs lies not in what they change but what they leave the same!

A minimalist verifiable credential carrying a government ID means nothing more and nothing less than the fact that the holder has been issued that ID. By keeping things simple, a VC avoids disturbing familiar trusted ways of dealing with people and businesses.

Powerful digital wallets are being rapidly embraced by consumers; modern web services are able to receive credentials from standards-based devices. We are ready to transform all important IDs from plaintext to verifiable credentials. Most people now could present any important verified data with a click in an app, with the same convenience, speed and safety as showing a payment card. With no change to backend processes and credentialing, we would cut deep into identity crime and defuse the black market in stolen data.

The post What do verifiable credentials verify? appeared first on Lockstep.

Tuesday, 20. August 2024

Spruce Systems

SpruceID Joins NIST National Cybersecurity Center of Excellence (NCCoE) to Accelerate Mobile Driver’s License Adoption

Learn about the current initiative, benefits of the mobile driver's license, and how SpruceID will collaborate with the NCCoE.

SpruceID is participating in the National Cybersecurity Center of Excellence (NCCoE) Accelerate Adoption of Digital Identities on Mobile Devices Consortium. This initiative will help define and facilitate a reference architecture for digital credentials that protect privacy, are implemented securely, enable equity, are widely adoptable, and are easy to use.

Understanding the Initiative

The National Institute of Standards and Technology (NIST) National Cybersecurity Center of Excellence (NCCoE) is a collaborative hub where industry, organizations, government agencies, and academic institutions work together to address businesses’ most pressing cybersecurity challenges.

The NCCoE is playing a pivotal role in expediting the adoption of mobile driver's license (mDL) standards and best practices. In partnership with technology vendors (including SpruceID), government agencies, regulatory bodies, standards organizations, and entities aiming to implement mDLs, the NCCoE is kicking off an initiative to build a reference architecture that showcases practical, real-world business use cases. This initiative will integrate mDLs with commercially available technologies and embed them into existing business processes:

“Whether boarding a plane, creating a bank account, or making an online purchase, mobile driver’s licenses (mDLs) and other digital credentials have the potential to improve the way we conduct transactions, both in person and online. To help realize this potential, the NCCoE is collaborating with more than a dozen partners from across the mDL ecosystem to build out reference implementations and to accelerate the adoption of mDL standards and best practices.” 

- Bill Fisher, co-lead of the NIST mDL project, NIST National Cybersecurity Center of Excellence

This reference implementation aims to promote standards and best practices for mDL deployments and address mDL adoption challenges. Over the next two years the project will produce guidance addressing:

- Know Your Customer/Customer Identification Program Onboarding and Access, which will demonstrate the use of an mDL and/or Verifiable Credentials (VC) for establishing and accessing an online financial account.
- U.S. Federal Government Credential Service Provider (CSP) and Federation, which will demonstrate the use of an mDL and/or VC for establishing a CSP account to access federated agency systems.
- Healthcare and Electronic Prescribe, which will demonstrate the use of an mDL and/or VC for provider access and prescription uses.

Benefits of the Mobile Driver’s License

Physical driver’s licenses were not designed for our online world. The current best practice for online identity verification asks users to take a picture of their driver’s license with a smartphone and to answer knowledge-based questions. The efficacy of these methods is being eroded by new technology, such as AI-generated images of driver’s licenses accurate enough to bypass document scanning tools and the ability of bad actors to get ahold of the information needed to answer knowledge-based questions.

mDLs function much like a traditional driver's license, carrying information such as name, date of birth, and address but in a digital format accessible through a dedicated mobile application, often referred to as a digital wallet. Compared to physical driver’s licenses, mDLs have several capabilities that make them easier to use with online and digital transactions:

- mDLs are underpinned by public key cryptography, making the credential cryptographically verifiable.
- mDLs can be integrated natively with device biometrics for user verification.
- mDLs can communicate natively between two mobile applications, but also in cross-device flows between mobile applications and the web browser on a laptop or tablet.
- mDLs offer the potential for selective disclosure, allowing users to pick and choose which information to share with third parties.

Transactions at financial institutions, healthcare providers, government services, and many other organizations could benefit from enhanced customer experiences, more accurate identity verification, and reduced fraud if they supported mDLs.

How SpruceID will Collaborate with NCCoE

SpruceID is proud to have been selected to partner with the NCCoE to expedite the adoption of mobile driver’s license standards and best practices. Our contributions to this project will include:

- Coordinating and collaborating with other parties to demonstrate success for the Financial Services Sector CIP/KYC use case, serving the primary role of a Wallet Provider.
- The use of our open-source libraries, including the SpruceKit Wallet, an application holding mDocs and Verifiable Credentials that can interact over the internet and app-to-app using ISO/IEC 18013-7 and OpenID4VP.
- Bringing our expertise and learnings from interoperability test events that we previously hosted for ISO/IEC 18013-7 in August 2023 and from the development and deployment of the California DMV mobile driver’s license application.

We look forward to leveraging our unique knowledge and expertise to help drive this initiative forward.

Stay up to Speed

Interested in learning more and staying up to date with major milestones? Attend upcoming mDL events and follow along for updates on the NCCoE website mDL home page.

Attend Upcoming Events

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.


KuppingerCole

Some Direction for AI/ML-ess Marketing

by John Tolbert For the last few years, we have been inundated with messaging about Artificial Intelligence (AI). AI is no longer a term mostly used by academicians, IT professionals, or sci-fi fans. Those in the IT security field have seen AI, ML (Machine Learning), and Generative AI (GenAI) proliferating in marketing, while product developers look for ways to incorporate these technologies into

by John Tolbert

For the last few years, we have been inundated with messaging about Artificial Intelligence (AI). AI is no longer a term mostly used by academicians, IT professionals, or sci-fi fans. Those in the IT security field have seen AI, ML (Machine Learning), and Generative AI (GenAI) proliferating in marketing, while product developers look for ways to incorporate these technologies into products. Vendors touting some variation of artificial intelligence in their products have garnered more investment. There have been productivity gains. But has “AI/ML” as a marketing term peaked?

A recent study in the Journal of Hospitality Marketing & Management, titled “Adverse impacts of revealing the presence of “Artificial Intelligence (AI)” technology in product and service descriptions on purchase intentions: the mediating role of emotional trust and the moderating role of perceived risk” shows that consumers are put off by the use of “AI” in product marketing. Some of the reasons cited include a lack of trust for AI, a lack of transparency about AI usage, and concerns about privacy. Although this study focused on consumer goods and services, do the lessons learned apply to IT, and specifically cybersecurity?

I recently returned from Black Hat 2024 in Las Vegas. While there was plenty of AI, ML, and GenAI signage in booths on the show floor, how vendors are marketing these technologies in products seems to be shifting a bit. Security practitioners are and have been aware of the presence of and need for machine learning in products for many years. An example is the use of ML detection models in Endpoint Protection Detection and Response (EPDR) products to identify new variants of malware. It is infeasible to build an EPDR solution today that does NOT use ML, given the volume of malware variants discovered every day. AI/ML is not new in the market, and it is not new to those of us working in the field. Perhaps this realization among product marketing teams is another reason why the messaging is changing and needs to evolve further.

2023 was certainly the year of GenAI, with large language models (LLMs) capturing not only the attention of the public but also becoming mainstream tools. Vendors large and small rushed to find ways to get GenAI into products. Such efforts are innovative and can result in usability improvements, but not always. Customers of IT security solutions may be skeptical about unqualified claims of how GenAI improves those products.

Continuing with the EPDR example, several vendors have natural language query interfaces powered by GenAI, guided investigation tools for analysts informed by AI, and executive level reports drafted by GenAI. These have the potential to save time and improve organizational security posture for customers. However, there are concerns about the quality of the output. Can it be trusted? AI outputs have explainability problems. Moreover, since the outputs from AI tools depend on the quality and relevance of the data in their models, how are security vendors getting a sufficient quantity of relevant data, and how do they assess the veracity of the outputs of their LLM functions? How can customers be assured that data governance and security policies are applied to the data from their organizations?

In discussing LLMs, how they work, and answering questions about whether LLMs lie or hallucinate in the Journal of Ethics and Information Technology, Hicks, Humphries, and Slater state that LLMs are “not designed to represent the world at all; instead, they are designed to convey convincing lines of text.” In the proceedings of the 2022 Conference on Human Information Interaction and Retrieval, Bender and Shah said about LLMs: “No reasoning is involved […]. Similarly, language models are prone to making stuff up […] because they are not designed to express some underlying set of information in natural language; they are only manipulating the form of language.”

At this point, IT (and especially IT security) vendors and their product marketing teams would be better served by providing more information about their use of ML and GenAI in their solutions. Assume you have a tech savvy audience, because you do. What kinds of AI technology are you using? For which functions is it being used? Where are you getting data for model training? How are you doing quality control on the outputs before releasing them to customers? These are the kinds of questions that buyers of security solutions have.

Join us in December in Frankfurt at our cyberevolution conference, where we will continue to dissect how AI is used in cybersecurity.

See some of our other articles and videos on the use of AI in security:

- Cybersecurity Resilience with Generative AI
- Generative AI in Cybersecurity – It's a Matter of Trust
- ChatGPT for Cybersecurity - How Much Can We Trust Generative AI?
- Asking Good Questions About AI Integration in Your Organization
- Asking Good Questions About AI Integration in Your Organization – Part II

Elliptic

Crypto regulatory affairs: Fed undertakes enforcement against Customers Bank for digital asset risk management gaps

The Federal Reserve Board has sent a warning to banks about the importance of addressing cryptoasset risk exposure through a recent and landmark enforcement action.


1Kosmos BlockID

Four Ways to Align Authentication with Business Needs

In a hybrid world that blends on-premises and cloud-based resources, securing access to sensitive data and systems is no longer achieved by defending a perimeter, but through authentication. While authentication technologies have evolved over the past decades from their humble password origins, preventing unauthorized access still hinges on choosing and implementing the right identity-based contro

In a hybrid world that blends on-premises and cloud-based resources, securing access to sensitive data and systems is no longer achieved by defending a perimeter, but through authentication. While authentication technologies have evolved over the past decades from their humble password origins, preventing unauthorized access still hinges on choosing and implementing the right identity-based controls.

This involves navigating a landscape where knowledge-based, possession-based, biometric, and multi-factor authentication (MFA) methods offer a variety of advantages and limitations. Let’s consider each of the options available to organizations and how to select the right mix of controls to improve their security posture.

Knowledge-Based Authentication

Knowledge-based authentication (KBA), which encompasses passwords and PINs, is the most traditional form of authentication. Its widespread adoption and user familiarity make it a convenient starting point for many security protocols. However, its susceptibility to social engineering, phishing attacks, and the perennial issue of weak password creation by users necessitate a cautious approach. For environments where ease of use is paramount and risk levels are comparatively low, KBA can serve as a component of a more comprehensive security strategy, particularly when augmented with additional authentication factors.

In practice, KBA fits environments where the accessed information is not highly sensitive or critical, and it can serve as a supplementary factor in conjunction with other methods, such as biometric or device-based authentication. Examples include accessing non-critical information, using KBA as a first factor alongside other authentication methods, and implementing it in public Wi-Fi hotspots for streamlined user access.

Possession-Based Authentication

Possession-based authentication methods require users to have a physical object, such as a security token or a mobile device, to gain access. This approach adds a tangible layer of security, making it harder for attackers to gain unauthorized access without physical possession of the required object. It’s particularly effective in scenarios where additional security is needed without significantly complicating the user experience, such as in financial transactions or access to high-security areas. However, the risk of loss or theft and the potential cost implications of deploying hardware devices must be considered.

Possession-based authentication methods offer heightened security measures for a range of scenarios, including financial transactions, remote work access, secure online transactions, and compliance-driven environments like legal and government agencies. In online banking, users require physical possession of a security token or mobile device to access their accounts securely. Similarly, in remote work settings, this method ensures that only authorized employees with designated devices can connect to corporate networks and sensitive data, mitigating risks associated with unauthorized access. Additionally, in e-commerce platforms and online payment systems, possession-based authentication enhances transaction security, reducing the risk of fraud and protecting sensitive financial information. Furthermore, compliance-driven industries can benefit from this approach to meet regulatory obligations and safeguard confidential information.
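A common concrete form of possession-based authentication is the time-based one-time password (TOTP) of RFC 6238, where only a device holding the shared secret can produce the current code. The sketch below uses the common defaults (HMAC-SHA-1, 6 digits, 30-second steps); it is a minimal illustration, not a production implementation.

```typescript
// Minimal RFC 6238 TOTP sketch: possession of the shared secret is what
// lets the device produce a valid code for the current time window.
import { createHmac } from "node:crypto";

function totp(secret: Buffer, stepSeconds = 30, digits = 6): string {
  // Counter = number of time steps since the Unix epoch, as an 8-byte value.
  const counter = Math.floor(Date.now() / 1000 / stepSeconds);
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));

  const hmac = createHmac("sha1", secret).update(msg).digest();

  // RFC 4226 dynamic truncation: take 4 bytes at an offset derived from the
  // last nibble of the HMAC, mask the sign bit, then reduce to N digits.
  const offset = hmac[hmac.length - 1] & 0x0f;
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return code.toString().padStart(digits, "0");
}
```

The server, holding the same secret, recomputes the code (usually allowing a window of one step either side for clock drift) and compares it to what the user submits.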

Biometrics

Biometric authentication offers a high-security level by utilizing unique user characteristics like fingerprints, facial recognition, or iris scans. This method is highly resistant to traditional hacking attempts and provides a seamless user experience. It is well-suited for environments where security cannot be compromised, such as in government or healthcare settings. Nevertheless, concerns around privacy, the potential for spoofing, and the need for compatible hardware investments can pose challenges. Organizations must weigh these factors against the critical need for secure and user-friendly authentication mechanisms.

In practice, biometric authentication is best suited for securing access to sensitive data and fortifying high-risk online systems. When deploying it, organizations must weigh privacy concerns, the potential for spoofing, and the compatible hardware investments it requires.

Multi-Factor Authentication (MFA)

MFA combines two or more authentication methods listed above to create a layered security approach, significantly enhancing protection against various threats. By integrating knowledge, possession, and biometric factors, MFA creates a dynamic defense mechanism that is much harder for attackers to bypass. This method is ideal for protecting sensitive data and critical systems, offering a balanced solution that addresses the vulnerabilities inherent in single-method authentication systems. While MFA introduces complexity and potential user resistance, its ability to significantly reduce security risks makes it a vital component of modern cybersecurity strategies.

Multi-factor authentication (MFA) is a versatile security method that finds applications across industries, serving to protect sensitive data and critical systems. Most commonly, MFA is required to ensure secure access to corporate systems from outside the office, and in e-commerce platforms to safeguard customer accounts and high-risk customer and citizen transactions. Overall, MFA provides a defense mechanism against various threats, combining multiple authentication factors to significantly enhance security and mitigate risks inherent in single-method authentication systems.

Passwordless

Passwordless authentication represents a significant leap forward in cybersecurity, eliminating the vulnerabilities associated with traditional knowledge-based methods. Most of the authentication methods described above still require a username and password as a first step in authenticating users. But by leveraging biometrics, mobile devices, or security keys, passwordless systems offer a user-friendly and highly secure alternative that reduces the risk of phishing, password theft, and unauthorized access. This method is particularly advantageous in creating a seamless user experience without compromising security, and is ideal for environments aiming to minimize friction while maintaining high security standards.

Organizations across diverse sectors, particularly those looking for a better, more secure user experience, should carefully consider integrating passwordless authentication into their strategic security frameworks. Doing so not only enhances security posture but also fosters a seamless and efficient user experience, aligning with a digital landscape where stringent security measures and user satisfaction are both paramount.
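In the browser, passwordless sign-in is typically built on the WebAuthn API. The sketch below shows the general shape of such a flow; the /example/* endpoints and the serialization of the assertion are assumptions, and production code must follow the full WebAuthn ceremony, including origin and signature checks on the server.

```typescript
// Browser-side sketch of a passwordless sign-in using WebAuthn.
async function passwordlessSignIn(): Promise<void> {
  // 1. Fetch a one-time challenge from the server (hypothetical endpoint).
  const { challenge } = await fetch("/example/webauthn/challenge")
    .then((r) => r.json());

  // 2. Ask the authenticator (platform biometric or security key) to sign it.
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(challenge), (c) => c.charCodeAt(0)),
      userVerification: "required", // e.g., Face ID, fingerprint, or PIN
      timeout: 60_000,
    },
  });

  // 3. Send the signed assertion back for server-side verification; no
  //    password is involved at any point. (Modern browsers expose
  //    PublicKeyCredential.toJSON(); serialization is simplified here.)
  await fetch("/example/webauthn/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(assertion),
  });
}
```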

Choosing the Right Strategy

The choice of authentication method should be driven by an organization’s specific needs, considering factors such as the sensitivity of the data, user experience requirements, and regulatory compliance mandates. Here are four key considerations for selecting the appropriate authentication method:

1. Risk Assessment: Evaluate the level of security risk associated with the data or systems being protected. Higher-risk scenarios may warrant more stringent authentication methods, such as biometric or MFA.
2. User Experience: Consider the impact on the user. While security is paramount, overly cumbersome authentication processes can lead to poor compliance and user frustration.
3. Cost and Infrastructure: Assess the financial and infrastructure implications of deploying new authentication technologies. While advanced methods like biometric authentication offer enhanced security, they also come with higher implementation costs.
4. Compliance Requirements: Ensure that the chosen authentication method aligns with industry regulations and standards, which may dictate specific security measures.

Defending against increasingly sophisticated cyber threats requires understanding the unique advantages and limitations of available authentication methods, and selecting the controls that are best aligned with organizational needs and user expectations. Using the methods described above can help define an authentication strategy that ensures security measures remain robust, responsive, and user-friendly.

The post Four Ways to Align Authentication with Business Needs appeared first on 1Kosmos.


Indicio

What is DIDComm? (With Pictures!)

The post What is DIDComm? (With Pictures!) appeared first on Indicio.

By Sam Curren

Trusted communication continues to be the internet’s critical missing component, even as our reliance on digital services like healthcare, mobile banking, and payments increases and seamless, secure interactions become vital. While there are applications and protocols designed to foster secure communication, they are fragmented and narrow in scope: they focus on specific areas, such as simplifying complex login procedures or particular security schemes, and fail to support the diverse kinds of communication that a variety of online activities require.

The result is an incomplete tech landscape, where direct, secure communication is not fully achievable, where users are left with a fragmented landscape of partial solutions, and a successful zero trust security practice continues to challenge even the most well-resourced organizations. Without holistic and user-friendly solutions that address these shortcomings, true, trusted, general communication on the internet remains an unfulfilled promise.

This is why more industries than ever are turning to decentralized identity and verifiable credentials to solve these missing pieces and why we’ve built DIDComm into the heart of Indicio Proven. While many other standards and protocols are developing to support the simple exchange of information using verifiable credentials, the vast majority of customer use cases that Indicio supports require both sides to authenticate, communicate, and build using the existing infrastructure they’ve already invested in. You can see deployments in travel, financial services, government and more.

The success comes from DIDComm

DIDComm, or DID Communication, is a protocol designed to enable secure and private communication between parties by using decentralized identifiers (DIDs). Unlike traditional methods for trusted connections, DIDComm provides a robust framework for mutual authentication and trusted communication, addressing the gaps in current technologies. DIDComm leverages Verifiable Credentials to add trust to long-term digital relationships. By integrating DIDComm into an existing tech stack or ecosystem, both end users and businesses benefit from enhanced security, privacy, and trust. 
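
To make this concrete, here is a minimal sketch of a DIDComm v2 plaintext message as it looks before encryption; the DIDs and body content are illustrative placeholders, and the normative structure is defined in the DIDComm Messaging specification:

```python
import json
import uuid

# A DIDComm v2 plaintext message; placeholders only. In practice this
# structure is encrypted to the recipient's DID keys before sending.
message = {
    "id": str(uuid.uuid4()),                                  # unique message id
    "type": "https://didcomm.org/basicmessage/2.0/message",   # protocol message type URI
    "from": "did:example:alice",                              # sender's DID
    "to": ["did:example:bob"],                                # recipient DIDs
    "body": {"content": "Hello, Bob!"},                       # protocol-specific payload
}

print(json.dumps(message, indent=2))
```

Because both "from" and "to" are DIDs whose keys each party controls, the encrypted envelope built from a message like this supports mutual authentication: each side proves control of its DID cryptographically.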

For end users, DIDComm ensures that their communications are not only encrypted but also authenticated. This protects them both from malicious actors impersonating them and from actors impersonating the business or other entity they are communicating with. Businesses and governments benefit in turn from secure, seamless interactions with customers, partners, and citizens, with reduced risk of impersonation, less fraud, and greater trust.

The decentralized nature of DIDComm also means there is no reliance on a central authority, organization, or company to manage the process or facilitate identity (anyone can use software to create a DID with an endpoint for DIDComm and cryptographically prove they control their DID). This increases resilience and reduces security vulnerabilities, supporting a zero trust architecture.

Incorporating DIDComm into your digital identity strategy is a game-changing move as it means that all parties in an identity ecosystem or communication channel can confidently authenticate each other and exchange information securely. This removes a fundamental weakness in current identity verification and communication.

The value of DIDComm lies in its ability to enable:

Secure communication: Traditional forms of digital communication, such as email, are often not encrypted at all, likely passing in plain text, meaning anyone who can observe network traffic can read it. And while email can be helpful as it serves both as an identifier and a method to communicate, the lack of secure, easy-to-use encryption creates security vulnerabilities when it comes to relaying sensitive information, such as health and financial records. While there are ways to encrypt email, they are typically clunky and not user-friendly. DIDComm solves this security problem in a way that is user-friendly, offering seamless key management and encryption.

DIDComm also fulfills the need to communicate securely while authenticating the identities of the participants. It requires an identifier that is verifiable and adds the ability to communicate both securely and privately.

Direct connection: DIDComm changes the nature of how we interact online, allowing us to regain the ability to communicate directly with others on the internet without dependence on third party platforms. This direct connection restores the security and trust that were lost with the reliance on intermediaries, such as email clients or social media platforms.

Extensibility: Much like the internet itself, DIDComm is highly extensible. It can gain new capabilities through the design of new protocols, allowing it to interact with a wide range of things, people, and systems. And where APIs are convenient ways to build complex communication into online interactions, they require a constant connection between their source and the end user, making them difficult to update and manage, especially if connectivity is lost. DIDComm is optimized for, and extremely compatible with, commonly used devices such as mobile phones and tablets.

Mutual authentication: Authentication from one side of a connection, which many traditional digital identity tools are capable of doing, is not enough. Both parties must be able to verify each other’s identities for there to be truly secure communication. But mutual authentication is rarely straightforward and often requires cumbersome setup and maintenance, which can deter widespread adoption. Applications and protocols also overlook the need for comprehensive privacy measures, failing to protect metadata or ensure data integrity across all layers of communication.

DIDComm enables mutual authentication, providing assurance to both parties in a communication channel that they are who they claim to be. While many existing systems authenticate one side of a connection, such as just identifying the customer or end user, it is equally important that the other side is also authenticated. Think of the phishing scams where fraudsters pretend to be your bank or another service to get you to share your login information with a bogus website or login portal. DIDComm eliminates this: you will always know you are interacting with your bank.

Protocol interoperability: DIDComm can also be used alongside more focused protocols, such as OpenID4VC (which is limited to only the exchange of verifiable credentials and doesn’t provide a generalized method of communication). DIDComm goes beyond single purpose protocols and combines the power of verifiable credentials with extensible communication. The trust gained by the exchange of verifiable credentials can then be used to coordinate powerful interactions, secure messaging, and more.

Until DIDComm, the internet has been missing an easy, comprehensive solution for secure and trusted communication. Applications and protocols built on DIDComm support use cases ranging from communicating with government border authorities for the preclearance of international travelers to businesses and financial institutions offering customized products to customers.

To get involved with DIDComm, individuals and organizations can participate in the work of the Decentralized Identity Foundation (DIF), contribute to the development of standards and protocols, collaborate with industry leaders, stay informed about the latest advancements, and help shape the future of decentralized identity and secure communication.

Indicio has extensive experience with DIDComm, and we’d love to help you integrate Indicio Proven into your existing systems. Reach out to Indicio and learn how DIDComm can empower your organization.

The post What is DIDComm? (With Pictures!) appeared first on Indicio.


Aergo

Aergo V4 Update: New Timeline and Key Considerations

As we continue to refine and enhance the Aergo network, we want to update our community on the revised timeline for the upcoming V4 hard fork. This adjustment allows us to ensure full compatibility with our current enterprise customers and their nodes and address a few minor issues identified during testing.

Why the Change?

Enterprise Node/Network Compatibility: Our enterprise customers play a crucial role in the Aergo ecosystem, and it’s vital that their nodes integrate seamlessly with the upcoming hard fork. We’re taking additional time to thoroughly test and align the upgrade with their specific requirements.

Minor Issues Identified: During the final stages of testing, a few minor issues were identified that need to be addressed. While these issues do not impact the hard fork's core functionality, resolving them now will prevent potential disruptions and ensure a smooth transition for all participants.

So far, we’ve completed approximately 95% of our Aergo V4 test scripts, but a few tests are still pending to ensure everything functions as expected. This means we will not meet our previously communicated mainnet hard fork target date of the end of August.

New Timeline

Current Phase: Ongoing Testing and Final Optimizations, with 95% of the Work Completed
Testnet Launch: Mid-September
Mainnet Hard Fork: End of September

We will continue working with key participants, including node operators, exchanges, and other partners, to ensure all necessary preparations are completed ahead of the new timeline. This includes additional testing, further optimization, and ensuring the community is fully prepared for the transition.

While delays can be challenging, this additional time is essential to ensure the hard fork meets the high standards our clients and community expect. We appreciate your understanding and continued support as we work to deliver a more robust, more reliable Aergo network.

Stay tuned for more updates!

Aergo V4 Update: New Timeline and Key Considerations was originally published in Aergo blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


PingTalk

What Is Password Spraying and How Do You Prevent It?

Learn about password spraying attacks, how they work, and how to defend your organization against them with our comprehensive guide.

Password spraying is an account takeover (ATO) cyberattack where attackers use a single common password or a handful of common passwords to try to access many accounts. This method spreads out login attempts across numerous accounts, making it harder to detect and block.

By using password spraying, attackers can effectively take over user accounts, leading to unauthorized access and potential exploitation of sensitive information.

These attacks are increasingly common and can lead to data breaches, financial loss, and damage to your organization's reputation. Understanding password spraying and how to defend against it is key to maintaining security.
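
To illustrate the detection challenge described above, here is a minimal sketch (the event format and thresholds are invented for this example) that flags source IPs whose failed logins span unusually many distinct accounts within a short window, which is the signature of spraying rather than conventional brute force:

```python
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=30)   # detection window (invented threshold)
DISTINCT_ACCOUNTS = 10           # distinct usernames that trigger an alert

def detect_spraying(failed_logins):
    """failed_logins: iterable of (timestamp, source_ip, username) tuples."""
    by_ip = defaultdict(list)
    for ts, ip, user in failed_logins:
        by_ip[ip].append((ts, user))
    flagged = set()
    for ip, attempts in by_ip.items():
        attempts.sort()  # order by timestamp
        for i, (start, _) in enumerate(attempts):
            users = {u for t, u in attempts[i:] if t - start <= WINDOW}
            if len(users) >= DISTINCT_ACCOUNTS:
                flagged.add(ip)  # many accounts, few attempts each, one source
                break
    return flagged
```

Per-account lockout thresholds miss this pattern precisely because each account sees only one or two failures; correlating failures across accounts by source is what surfaces it.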

Monday, 19. August 2024

Microsoft Entra (Azure AD) Blog

Face Check is now generally available

Earlier this year we announced the public preview of Face Check with Microsoft Entra Verified ID – a privacy-respecting facial matching feature for high-assurance identity verifications and the first premium capability of Microsoft Entra Verified ID. Today I’m excited to announce that Face Check with Microsoft Entra Verified ID is generally available. It is offered both by itself and as part of the Microsoft Entra Suite, a complete identity solution that delivers Zero Trust access by combining network access, identity protection, governance, and identity verification capabilities.

Unlocking high-assurance verifications at scale

There’s a growing risk of impersonation and account takeover. Bad actors use insecure credentials in 66% of attack paths. For example, impersonators may use a compromised password to fraudulently log in to a system. With advancements in generative AI, complex impersonation tactics such as deepfakes are growing as well. Many organizations regularly onboard new employees remotely and offer a remote help desk. Without strong identity verification, how can organizations know who is on the other side of these digital interactions? Impersonators can easily bypass common verification methods such as counting bicycles on a CAPTCHA or asking which street you grew up on. As fraud skyrockets for businesses and consumers, and impersonation tactics have become increasingly complex, identity verification has never been more important.


Microsoft Entra Verified ID is based on open standards, enabling organizations to verify the widest variety of credentials using a simple API. Verified ID integrates with some of the leading verification partners to verify identity attributes for individuals (for example, a driver’s license and a liveness match) across 192 countries. Today, hundreds of organizations rely on Verified ID to remotely onboard new users and reduce fraud when providing self-service recovery. For example, using Verified ID, Skype has reduced fraudulent cases of registering Skype Phone Numbers in Japan by 90%.

Face Check with Microsoft Entra Verified ID

Powered by Azure AI services, Face Check adds a critical layer of trust by matching a user’s real-time selfie and the photo on their Verified ID, which is usually from a trusted source such as a passport or driver’s license. By sharing only match results and not any sensitive identity data, Face Check strengthens an organization’s identity verification while protecting user privacy. It can detect and reject various spoofing techniques, including deepfakes, to fully protect your users’ identities.
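
For developers, Face Check is enabled when creating a presentation request through the Verified ID Request Service API. The sketch below shows roughly what that configuration looks like; the DIDs, URLs, and values are placeholders, and the exact field names should be confirmed against the current Microsoft Entra Verified ID documentation:

```python
# Illustrative presentation request enabling Face Check; placeholder values only.
presentation_request = {
    "authority": "did:web:verifier.example.com",   # verifier's DID (placeholder)
    "callback": {
        "url": "https://verifier.example.com/api/callback",  # placeholder endpoint
        "state": "request-state-123",
    },
    "requestedCredentials": [
        {
            "type": "VerifiedEmployee",
            "acceptedIssuers": ["did:web:issuer.example.com"],
            "configuration": {
                "validation": {
                    "faceCheck": {
                        "sourcePhotoClaimName": "photo",   # credential claim holding the ID photo
                        "matchConfidenceThreshold": 70,    # minimum selfie-match score
                    }
                }
            },
        }
    ],
}
```

Only the match confidence score is returned to the verifier; the selfie and the source photo stay with the user, which is what keeps the check privacy-respecting.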


BEMO, a security solution provider for SMBs, integrated Face Check into its help desk to increase verification accuracy, reduce verification time, and lower costs. The company used Face Check with Microsoft Entra Verified ID to protect its most sensitive accounts which belong to C-level executives and IT administrators.


Face Check not only helps BEMO improve customer security and strengthen user data privacy; it also delivered a 90% efficiency improvement in addressing customer issues. BEMO’s help desk now completes a manual identity verification in 30 minutes, down from 5.5 hours before implementing Face Check.


“Security is always great when you apply it in layers, and this verification is an additional layer that we’ll be able to provide to our customers. It’s one more way we can help them feel secure.” – Jose Castelan, Support and Managed Services Team Lead, BEMO

Check out the video in the original post to learn more about how your organization can use Face Check with Microsoft Entra Verified ID.

Jumpstart with partners

Our partners specialize in implementing Face Check with Microsoft Entra Verified ID in specific use cases or verifying certain identity attributes such as employment status, education, or government-issued IDs (with partners like LexisNexis® Risk Solutions, Au10tix, and IDEMIA). These partners extend Verified ID’s capabilities to provide a variety of verification solutions that will work for your business’s specific needs.


Explore our partner gallery to learn more about our partners and how they can help you get started with Verified ID.

Start using Face Check with Microsoft Entra Verified ID

Face Check is a premium feature of Verified ID. After you set up your Verified ID tenant, there are two purchase options to enable Face Check and start verifying:


1. Begin the Entra Suite free trial, which includes 8 Face Check verifications per user per month.
2. Enable Face Check within Verified ID and pay $0.25 per verification.

Visit the Microsoft Entra pricing page for more details.

What’s Next?

Learn more about how Microsoft Entra Verified ID works and how organizations are using it today, and join us for the Microsoft Entra Suite Tech Accelerator on August 14 to learn about the latest identity management and end-to-end security innovations.

Ankur Patel, Head of Product for Microsoft Entra Verified ID

Read more on this topic

Watch the Zero Trust spotlight
Learn about the Microsoft Entra Suite
Learn more about Face Check with Microsoft Entra Verified ID in the FAQ

Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn

Evernym

Multi-Factor Authentication: How It Defends Against Threats and Why It Matters

Multi-Factor Authentication: How It Defends Against Threats and Why It Matters In an era where cyber threats are becoming increasingly sophisticated, securing access to systems and data is paramount. Multi-factor authentication (MFA) has emerged as a critical tool in enhancing security by adding layers of protection beyond traditional passwords. By requiring ...

The post Multi-Factor Authentication: How It Defends Against Threats and Why It Matters appeared first on Evernym.


Indicio

How verifiable credentials disrupt online fraud, phishing, and identity theft

The post How verifiable credentials disrupt online fraud, phishing, and identity theft appeared first on Indicio.

By Ken Ebert

Everyone’s online life begins with a user account, a login, and a password, which, combined, turn into an identity. I am my email address — or my social media account login. For the past twenty-five years, life online has evolved by accumulating these digital identifiers. The more we have, the more we can do online.

We don’t really own these digital identifiers: they’re lent to us on the assurance that we are who we claim to be, via the personal information we provide. This information is stored in a database along with lots of other people’s personal data so that they, too, can have a digital identifier.

This is how we identify each other on a network that was designed to manage computer identity rather than personal or organizational identity. It’s been amazingly successful at allowing billions of people to exist and interact online. Unfortunately, what it hasn’t been amazingly successful at is preventing all those people from having their identities stolen or faked.

One anecdote may be familiar: you get an email “from your bank.” Due to suspicious activity, your account has been locked and you need to log on to unlock it. You log in (but not you, because you’d never be fooled by this, right?) and… it’s not your bank. Whoever it is you’ve just given your login details to can now access your real bank account. Ninety percent of successful data breaches are a result of successful phishing.

Or maybe it doesn’t have to be this sophisticated: your password is 1,2,3,4,5 — and Malicious Actors Inc guess their way into your account. Or you reuse the same password across accounts and a data breach for one of these accounts means multiple accounts are now accessible to hackers.

And not just you: once attackers are into a database, every account is exposed. The whole defense collapses if a single access point is compromised.

Identity fraud can also be sophisticated, such as someone using generative AI tools to create a deepfake of your biometrics or those of your boss — and you give them 25 million dollars, thinking you’re following legitimate directions.

Yes, there are security solutions like multifactor authentication, but they can only do so much, given that the underlying architecture of ‘account logins-passwords-databases’ is so hard to defend. And many people dislike the friction they add to online interaction, which is already burdened by an endless cycle of forgetting and resetting passwords. I recently joined a Teams meeting where I had to receive an email with a PIN code, experience two biometric checks, and supply a two-digit code from my authenticator app. 

A digital transformation in how we share and verify data

Here’s what verifiable credentials and decentralized identity do: they remove the underlying problem of user accounts, logins, and passwords.

Instead of authenticating a user account through a login and password, a user is authenticated with a verifiable credential and cryptography. 

What is a verifiable credential? Think of it like an envelope for sealing and sharing digital information. The source of the envelope (the organization issuing the credential) can be cryptographically verified. The information in the envelope is digitally signed, which, in essence, means that any attempt to alter or tamper with the information breaks the seal and can be detected.
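
In code terms, the "envelope" looks something like the following minimal W3C Verifiable Credential; the DIDs, date, and proof value are placeholders:

```python
# A minimal W3C Verifiable Credential; all identifiers are placeholders.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "EmailCredential"],
    "issuer": "did:example:issuer123",           # who sealed the envelope
    "issuanceDate": "2024-08-19T12:00:00Z",
    "credentialSubject": {
        "id": "did:example:holder456",           # who the claims are about
        "email": "alice@example.com",            # the sealed information
    },
    "proof": {
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:issuer123#key-1",
        "proofPurpose": "assertionMethod",
        "proofValue": "z3FXQ...placeholder...",  # digital seal over the contents
    },
}
```

Any change to the credentialSubject invalidates the proof, which is what makes tampering detectable.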

But this is only one of the elements in the new authentication ‘stack.’

You can accept and share a verifiable credential because the software in your digital wallet has created an address for it to be sent to. This address — a decentralized identifier or DID — is under your control and you can prove this control cryptographically when you interact with another DID. 

The combination of a DID and a verifiable credential enables you to prove that you are in control of a specific identity, and you can now attach any data to that identity by writing it to a credential.

The upshot is that people hold their data, authenticate themselves and each other cryptographically, and share data that can be trusted because it verifiably hasn’t been altered (assuming that we trust the original source of the data).

This is the instantaneous magic behind seamless digital travel. A person takes their physical passport and — provided it has a chip — the software reads the information from the chip and converts it into a digital credential. The software also requires the person to do a liveness check with a selfie and then compares the selfie with the digital image from the passport chip. The passport data is authenticated as having come from a legitimate passport-issuing authority, and the person is issued a Digital Travel Credential (DTC) by an airline.

When a DTC is presented (touchlessly), the source of the DTC is instantly authenticated, along with the integrity of the data in the DTC. Additional biometric authentication and, of course, biometric access to the device, provide further confidence that the person presenting the DTC is the holder of a legitimate passport. 
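
The issuance flow just described can be summarized in a short sketch; every function here is a hypothetical stand-in for real passport-chip and biometric SDKs, not an Indicio API:

```python
# Illustrative DTC issuance flow; all helpers are hypothetical placeholders.
def issue_dtc(passport_chip, camera):
    data = read_chip(passport_chip)               # read e-passport data groups
    verify_issuer_signature(data)                 # passport authority's signature
    selfie = capture_liveness_selfie(camera)      # liveness-checked selfie
    if not faces_match(selfie, data.chip_photo):
        raise ValueError("Holder does not match passport photo")
    return sign_credential(data, issuer="airline")  # issue the DTC as a credential
```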

The result is portable trust. Verifiable data can go from anywhere to everywhere — and so can you.

####

Sign up to our newsletter to stay up to date with the latest from Indicio and the decentralized identity community

The post How verifiable credentials disrupt online fraud, phishing, and identity theft appeared first on Indicio.


KuppingerCole

Oct 17, 2024: IAM meets ITDR: A Recipe for Robust Cybersecurity Posture

In today's digital landscape, identity is at the forefront of enterprise security. With a growing number of cyberattacks originating from compromised identities, organizations must adopt an identity-first security approach. This approach emphasizes proactive measures over reactive responses, crucial for minimizing risks and safeguarding sensitive information.  

liminal (was OWI)

2024 Liminal Landscape: Your Blueprint for Market Leadership

The post 2024 Liminal Landscape: Your Blueprint for Market Leadership appeared first on Liminal.co.

Sunday, 18. August 2024

KuppingerCole

Eight Recommendations for CISOs in 2025

In this episode of the KuppingerCole Analyst Chat, host Matthias Reinwarth is joined by Annie Bailey, Research Strategy Director at KuppingerCole Analysts, to discuss the key trends that will shape the cybersecurity landscape through 2025. The conversation explores the increasing complexity of the attack surface, the growing importance of resilience and recovery in cybersecurity strategies, and the dual role of AI as both a threat and a defensive tool. In addition, the discussion covers the impact of emerging regulations, the need for advanced cybersecurity infrastructure, and how organizations can prepare for the anticipated challenges ahead.



Friday, 16. August 2024

Spruce Systems

SpruceID Joins Harvard and Microsoft Researchers for New “Personhood Credential” Proposal

Empowering humans is the best way to fight a coming wave of A.I.-powered fraud and disinformation.

Last week, Wayne Chang (CEO of SpruceID) and a broad coalition of researchers from Harvard, Microsoft, MIT, the Decentralized Identity Foundation (DIF), and other organizations released a major new proposal for fighting online disinformation and fraud. The proposed solution is a digital credential that would give internet users a powerful new tool for proving their authenticity online, while also ensuring strong privacy.

Our new paper proposes a “personhood credential,” or PHC, based on much the same cryptography-based digital credential technology that powers SpruceID’s mobile driver’s licenses in California and elsewhere. Much like SpruceID’s mDL deployments, the PHC system would reveal only the minimum necessary information about any user: in this case, simply that they are a human, not a bot or AI agent. The PHC would not disclose any identifying information, and is also designed to prevent cookie-like traceability. 

The credential would be an optional tool, primarily for specific users who want to establish a high level of credibility online while protecting their privacy, and for service providers who want to reduce fraud.

Why We Need to Prove Personhood Online

One major goal of the PHC is to distinguish authentic content on social media from deepfakes, coordinated manipulation, and other automated activity. Worries about inauthentic content online have been high for close to a decade now, but the recent advent of generative AI models, including their ability to mimic specific individuals on video, has created an even higher-risk environment for disinformation.

Proving authenticity on the internet is difficult for technical reasons, and no truly good solution has ever emerged. That’s one reason online financial fraud and identity fraud have steadily accelerated, now costing individuals and institutions tens of billions of dollars annually. The rise of AI-generated content, meanwhile, has triggered worries of a “dead internet” full of robots talking endlessly to one another.

A digital credential to demonstrate personhood could combat both disinformation and fraud, mitigate denial-of-service attacks using automated “botnets,” and empower individuals to prove their authenticity – even if they wish to remain anonymous.

Harnessing the Power of Encryption for Online Authentication

The proposed new PHC system is fundamentally user-controlled. Among other features, that means:

1. The PHC is optional for all users.

2. It cannot reveal real-world identities.

3. Users can choose their PHC issuer.

Optionality: While any natural person could request and receive a PHC, a PHC would not (and in fact could not) be required in order to use the internet. Specific high-security websites or online services, such as banking portals, may choose to require the PHC as an anti-fraud measure. More generally, we expect PHC use and adoption to be driven from the bottom up by users who wish to prove their authenticity.

Anonymity and Pseudonymity: Crucially, the system is designed to prove only that the holder is a person, without transmitting any specific data, such as name, credit card, birth date, or location. This is possible because issuers confirm an applicant’s authenticity offline, then issue an anonymized PHC credential.

The digital credentials themselves are validated and secured by cryptographic signatures. Related techniques are used to ensure that even these signed credentials are “unlinkable” – that is, that a user’s online activity cannot be tracked or collated. If the user desires, however, the PHC could also be used to preserve a single user identity over time.
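
As a rough sketch of the signature layer alone, here is how an issuer could sign, and a verifier check, a bare personhood claim using Ed25519 via the Python cryptography library; note that a real PHC would add blind or BBS-style signatures to achieve the unlinkability described above, which this sketch omits:

```python
# Sketch of the signature layer only; real PHCs need unlinkable signatures.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()  # the PHC issuer's signing key

# The credential asserts personhood and nothing else: no name, no birth date.
claim = json.dumps({"is_person": True, "issuer": "phc-issuer-example"}).encode()
signature = issuer_key.sign(claim)

# A verifier checks the issuer's signature; verify() raises InvalidSignature
# if the claim was altered or the signature is forged.
issuer_key.public_key().verify(signature, claim)
print("personhood claim verified")
```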

Issuer Choice: Personhood credentials are issued and signed by an open network of PHC issuers, with measures to prevent the issuing of multiple credentials to a single person. The open issuer network ensures no issuer is able to abuse their power, for instance by limiting the uses a PHC is put to, or selecting who is eligible to receive one.

The Open PHC Issuer Network

It may seem counterintuitive that a proof of personhood credential can be entrusted to a totally open network of self-selected issuers. While there are challenges and tradeoffs, we and our research coalition believe such a system strikes a balance: preserving democratic openness, while harnessing market dynamics to elevate the most trustworthy PHC issuers.

The alternative, restricting issuance only to already “trusted” issuers, would both restrict public access to the PHC credential, and create a “single point of failure” for the broader system. Potential failure conditions for a restricted-issuer system would include compromise by external hacking or internal subversion, such as the use of DMV staff privileges to gain unauthorized data access. Even worse, though, is the potential emergence of a “ministry of information” under which issuers control how PHCs are used to validate online content. 

To prevent those outcomes, the PHC credential must be available from a variety of sources. Different issuers will have different standards and procedures for proving user authenticity. These could range from government-issued identity documents and an in-person interview, to decentralized approaches relying on digital proofs of interactions like shopping and messaging that can’t be faked by artificial intelligence.

By the same token, services seeking to validate humanity would be free to choose which issuers’ credentials to accept, unleashing competitive dynamics that would motivate provision of PHC services tailored for a variety of applications and users. For instance, a bank might require a PHC issued by a government entity, while a social media site could accept a less rigorous PHC. 

One challenge of the open issuer network is the risk that multiple issuers would issue PHCs to the same natural human, potentially allowing those additional credentials to be misused. This risk is still being tackled by researchers, but even with the possibility of multiple issuance, the system represents a significant improvement over the current, unlimited ability of bad actors to impersonate humans online.

Above all, the open nature of PHC issuance would prevent the accrual of more power to governments, providing a free-market alternative to governmental “ministries of truth” exercising anti-democratic information control.

Proving Humanity and Protecting the Information Commons

The internet is reaching a crisis point thanks to the continuing rise of spam, fraudulent content, data leakage, and hacking. The adoption of the PHC credential would benefit the entire digital information and security ecosystem, not merely those who hold or accept the credential.

The PHC would immediately distinguish authentic online content and interactions from automated manipulation, improving the online experience even for users without their own PHC. That’s both because the most authentic content would be easy to spot, and because the very existence of this new form of verification would disincentivize the creation of misleading content.

The PHC would provide this benefit without adding more personal data to “data hoards” likely to be targeted by hackers. Indeed, it’s these very large-scale hacks, such as the recent theft of 3 billion records, including government ID numbers, that are rapidly rendering “knowledge-based” security measures obsolete and making better approaches necessary. In this compromised environment, adding the PHC as an access control tool for sensitive online applications would have a substantial impact on hacking and fraud.

For now, the personhood credential is a general proposal, with much work remaining both in designing the overall system and creating specific technical implementations. That means its benefits are still some time in the future, but the online fraud and disinformation it aims to address isn’t going anywhere – if anything, the situation seems poised to get worse. 

SpruceID is proud to have a hand in this major new proposal, and we’ll be contributing our expertise in identity, privacy and encryption to help bring it to fruition. If you see potential for the PHC to strengthen your organization’s digital efforts, please reach out – we’d be excited to learn about your needs, and help you prepare for a more authentic online future.

Read the Full Paper

About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions. Learn more on our website.


paray

Risks of Non-Compliance with FinCEN’s BOI Reporting Rule

Non-compliance with FinCEN’s Beneficial Ownership Information (BOI) reporting requirement could expose your business to significant financial and legal risks. Here’s what you need to know about the potential consequences of failing to comply with this critical regulation. FinCEN has the authority to impose hefty fines on businesses failing to meet the BOI reporting requirement. Penalties … Continue reading Risks of Non-Compliance with FinCEN’s BOI Reporting Rule →

Dock

The EU Digital Identity Wallet: A Beginner's Guide

With the approval of eIDAS 2, 400 million EU citizens will soon have a EU Digital Identity Wallet containing legal credentials issued by their national governments. 

The shift from physical documents to digital IDs is one of the most significant changes in identity history. This evolution requires ID companies to adapt, innovate, and reimagine the possibilities of digital verification.

The EU Digital Identity Wallet provides a secure and versatile storage for digital credentials. It aims to simplify digital interactions across borders while ensuring interoperability and user control.

In this post, we cover the details of the EU Digital Identity Wallet, including its features, benefits, and applications, so that you gain a comprehensive understanding of it.

Let's dive in: https://www.dock.io/post/eu-digital-identity-wallet


Civic

Tokenized Identity: Permissioned vs Permissionless Assets on Solana with Austin Federa, Solana Foundation

In this episode of Tokenized Identity, Titus Capilnean, our VP of Go-To-Market, speaks with Austin Federa, Head of Strategy at Solana Foundation. They explore the world of permissioned and permissionless assets on Solana, when builders need to move the dial towards adding restrictions to comply with real-world regulations and how this can bring more web2 […]

The post Tokenized Identity: Permissioned vs Permissionless Assets on Solana with Austin Federa, Solana Foundation appeared first on Civic Technologies, Inc..


Dock

Dock implements BBS as the default signature algorithm in the Anonymous Credentials format

Technology standards are always changing, and it can be expensive for products to keep up. The rate of change is even faster for new technologies with emerging standards, such as the standards for verifiable credentials that are used to create reusable digital identities. Our customers don’t have to worry because our APIs hide the changes in the underlying credential standards. During the April 2024 Internet Identity Workshop, Kazue Sako from Waseda University provided an update on recent developments in BBS cryptography which serves as a good example of the complexity hidden by our products.

Dock’s Anonymous Credentials use an advanced cryptographic signature algorithm that was invented in 2004 and is known as BBS. BBS signatures support advanced privacy capabilities like unlinkable selective disclosure, while also being faster and smaller than other signature algorithms with similar capabilities. However, when BBS was originally proposed no one knew how to mathematically prove the security of the algorithm. Various modifications were made to BBS signatures to make it easier to prove their correctness, and in 2016 a version of the algorithm called BBS+ proved to be efficient enough to be widely used in verifiable credentials. We used BBS+ signatures when we first implemented our Anonymous Credentials format.

A paper published in 2023 includes a security proof for the original BBS algorithm, while also proposing some efficiency improvements over the BBS+ approach to verifying signatures with selective disclosure. Now that BBS signatures are known to be correct, we can use them instead of the BBS+ variant and benefit from the reduced computation requirements. The 2023 variant of BBS replaced BBS+ as the target of standardization at the IETF. We implemented support for BBS2023 last fall, and recently made it the default signature algorithm in the Anonymous Credentials format. This change is transparent to our customers, who now use the best version of the algorithm when issuing new credentials, while we also ensure that existing credentials remain verifiable.
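
To see why this matters for credentials, here is a conceptual sketch of unlinkable selective disclosure; every bbs_* function is hypothetical pseudocode rather than Dock's API, standing in for a BBS library such as the one being standardized at the IETF:

```python
# Conceptual pseudocode; the bbs_* functions and keys are hypothetical stand-ins.
messages = ["name=Alice", "dob=1990-01-01", "country=DE", "status=verified"]

signature = bbs_sign(issuer_secret_key, messages)  # one signature over all claims

# The holder derives a fresh proof revealing only claims 2 and 3.
proof = bbs_derive_proof(
    issuer_public_key, signature, messages,
    reveal_indices=[2, 3],            # country and status only
    nonce=verifier_nonce,             # binds the proof to this session
)

# The verifier learns the revealed claims and that the issuer signed them,
# but two such proofs cannot be linked to each other or to the signature.
assert bbs_verify_proof(issuer_public_key, proof,
                        revealed={2: "country=DE", 3: "status=verified"},
                        nonce=verifier_nonce)
```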

As you follow our release notes and roadmap updates, you’ll see additional examples of how we track the evolution of identity technologies so that our customers don’t have to.


Gartner Rebuttal: Why Decentralized ID can improve KYC Compliance

In Gartner’s recently released 2024 Market Guide for Decentralized Identity, they suggest that organizations looking to improve their compliance processes with decentralized identity technologies should adopt a skeptical stance. They say:

A significant number of vendors claim to have the functionality within their DCI solution to comply with KYC and AML regulations. DCI vendors see this as crucial for making KYC and AML compliance processes more efficient. However, Gartner’s view is that, at this time, banks cannot make a good business case for transitioning away from their traditional compliance process, regardless of its inherent challenges.

At Dock Labs, we regularly speak with organizations who are unhappy with the costs and pains associated with KYC and AML compliance. These forward-thinking organizations find that reusable identity credentials provide them with essential tools to lower the costs of verifying individuals, and improve the experience of the users onboarding to their systems. They get these benefits without increasing fraud or compliance risk while simultaneously improving their compliance with privacy requirements and reducing the cost of protecting user data.

The difference in perspective is that these innovative organizations don’t see DCI as a replacement for existing compliance processes, but as new tools that can augment what is working now. With verifiable credentials as part of their toolbox, IAM practitioners can assemble a better solution than can be obtained solely with traditional compliance processes.

For example, think about opening a savings account online. You will likely be required to follow a traditional approach to compliance which requires a number of steps to verify your identity:

1. Take a picture of your national identity document and a selfie in order to validate your legal name.
2. That legal name must then be checked against a watchlist of sanctioned people.
3. You will then be asked to enter your mailing information, which will be validated with an address service.
4. You then have to enter a phone number, which will be verified by sending you a text message that you must enter into the web site.
5. You will also be asked to enter an email address, which will be verified by sending you a link that you have to click on.

At this point you can finally set up your account. After recently completing this process with a family member, we were offered the opportunity to open a credit card with a partner bank. But we gave up when we found that we would need to go through the whole process again.

I wished that the savings bank had issued us a credential, accepted by the partner bank, showing that our legal name, tax number, mailing information, phone number, and email address had already been validated. Accepting the data through a credential would have saved us the hassle of data entry and re-validation, while also ensuring that the partner bank is only using data that has been verified by a trusted source according to the rules of their partnership agreement.

It is true that using credentials does not remove the partner bank’s duty to record their basis for trusting the information. Particularly sensitive checks, such as the watchlist check, may need to be repeated. The referring bank may also charge a fee for the use of the identity credentials that they issued. Regardless, the credential-enabled process is much less painful for everyone involved.

Even Gartner acknowledges that decentralized identity technologies can help streamline regulatory compliance. We wholeheartedly agree with the advice they give near the end of their report, when they say:

Although regulations were initially expected to erect barriers to the adoption of DCI in heavily regulated industries like financial services, new DCI use cases allow organizations to comply with them. SRM leaders should explore how DCI can enable them to comply with regulations more easily, privately, and securely than conventional means.

We at Dock Labs are happy to help organizations stay ahead of their competitors by improving their KYC and AML compliance today.


PingTalk

Session Hijacking - How It Works and How to Prevent It

Learn about session hijacking, detection methods, and prevention techniques to safeguard your digital assets.

A session hijacking attack is one of the more common ways in which malicious actors can commit fraud. It allows black hat hackers to completely bypass secure authentication mechanisms, including multi-factor authentication (MFA) and others. This, in turn, grants access to a user’s secured accounts and systems, which can give attackers free rein to steal sensitive data. These types of attacks pose a serious threat to cybersecurity, both on an individual and organizational scale. The ramifications can include extensive financial losses and long-term damage to an organization’s reputation.

You may not be able to prevent your organization from being targeted by session hijacking attacks, but there are steps you can take to recognize these attacks and stop them in their tracks. Keep reading to explore the hallmarks of session hijacking, the various ways it can be attempted, and the prevention methods you can deploy to protect your users and your business.
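
One standard preventive layer is hardening how session cookies are issued. Here is a minimal sketch using Flask (equivalent flags exist in most web frameworks): these settings keep the session cookie away from scripts, off plain HTTP, and out of most cross-site requests:

```python
# Minimal sketch of session-cookie hardening in Flask.
from datetime import timedelta
from flask import Flask

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder; load from a secret store in practice

app.config.update(
    SESSION_COOKIE_HTTPONLY=True,   # JavaScript cannot read the cookie (blunts XSS theft)
    SESSION_COOKIE_SECURE=True,     # cookie only sent over HTTPS (blunts sniffing)
    SESSION_COOKIE_SAMESITE="Lax",  # limits cross-site requests carrying the cookie
    PERMANENT_SESSION_LIFETIME=timedelta(minutes=30),  # short-lived sessions
)
```

Rotating the session identifier after login and after any privilege change further narrows the window in which a stolen session token is useful.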


BlueSky

Highlighting Community Starter Packs

Join a starter pack today!

In June, we released starter packs — personalized invites that allow you to bring friends directly into your slice of Bluesky.

Check out and join some of the starter packs that the Bluesky community has created!

I've made a start, only a few here so far so will keep searching - but if anyone knows any UK MPs I've missed let me know and I will add go.bsky.app/FACCR8t #ukpolitics

— Geoff (@geoffdeburca.bsky.social) Aug 13, 2024 at 2:31 AM

New here and like comics? Well @gregpak.bsky.social has you covered! Here are two starter sets of folks to follow! First a bunch of creators go.bsky.app/R4eqmGf

— Adam P. Knave (@adampknave.com) Aug 13, 2024 at 7:44 AM

I have made a ChemSky starter pack and am posting here to help boost visibility. This list is not exhaustive, but should hopefully help newcomers or rejoiners find some accounts and feeds to follow go.bsky.app/C9BtrLj

— Laura Howes (@laurahowes.bsky.social) Aug 15, 2024 at 11:44 AM

I made a starter pack for those fleeing #EduTwitter and joining #EduSky which should let you find a bunch of good people. go.bsky.app/HQHD4R1

— Caroline Spalding (@mrsspalding.bsky.social) Aug 15, 2024 at 6:34 AM

Calling all folk with an interest in UK public policy: I’ve created a starter pack of think tankers, policy analysts & commentators active on @bsky.app go.bsky.app/LtNiL1o

— Jessica Studdert (@jesstud.bsky.social) Aug 14, 2024 at 7:46 AM

starter pack of OC artists who are under 100 followers at the time of making this list! 🩷 go.bsky.app/6LGDx5g

— Saba 🏳️‍🌈 (@ace-of-dragons.bsky.social) Aug 14, 2024 at 11:36 AM

Starter pack for #nufc fans here. go.bsky.app/HmjNT4

— Kev Lawson (@editkev.football) Aug 11, 2024 at 2:09 PM

I love this starter pack business, so I've made one of some of the women I follow on here (including the estate of Ursula K Le Guin because I'm obsessed). I'm sure I'm missing a ton of great people. Anyone else I should include? go.bsky.app/2rubRr3

— Alona Ferber (@aloner.bsky.social) Aug 15, 2024 at 6:21 AM

Starter Pack for Seismology and Earthquake people. Add missing accounts in the comments and I'll add them to the pack! ⚒️🧪 #Geology go.bsky.app/ND4oS9k

— Henning ⚒️ (@geohenning.bsky.social) Aug 12, 2024 at 11:28 AM

Find more communities directly on Bluesky! See you there: bsky.app.